Test Report: Docker_Linux_crio_arm64 20427

a480bdc5e776ed1bdb04039eceacb0c7aced7f2e:2025-02-17:38392

Failed tests (1/331)

Order  Failed test                  Duration
36     TestAddons/parallel/Ingress  154.24s
TestAddons/parallel/Ingress (154.24s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-925274 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-925274 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-925274 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f50412e5-1cb1-4c3c-8e3b-2dc65155ac8e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f50412e5-1cb1-4c3c-8e3b-2dc65155ac8e] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003692599s
I0217 12:37:48.166435  860382 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-925274 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.376375492s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-925274 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-925274
helpers_test.go:235: (dbg) docker inspect addons-925274:

-- stdout --
	[
	    {
	        "Id": "9b40238cd75d5fee38cec2ca2e7d2104a6411c259bb772ffbf35325be607f380",
	        "Created": "2025-02-17T12:33:51.395232815Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 861662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-17T12:33:51.5560469Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:86f383d95829214691bb905fe90945d8bf2efbbe5a717e0830a616744d143ec9",
	        "ResolvConfPath": "/var/lib/docker/containers/9b40238cd75d5fee38cec2ca2e7d2104a6411c259bb772ffbf35325be607f380/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b40238cd75d5fee38cec2ca2e7d2104a6411c259bb772ffbf35325be607f380/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b40238cd75d5fee38cec2ca2e7d2104a6411c259bb772ffbf35325be607f380/hosts",
	        "LogPath": "/var/lib/docker/containers/9b40238cd75d5fee38cec2ca2e7d2104a6411c259bb772ffbf35325be607f380/9b40238cd75d5fee38cec2ca2e7d2104a6411c259bb772ffbf35325be607f380-json.log",
	        "Name": "/addons-925274",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-925274:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-925274",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/98f880b7600778d607445e1a47173ebe9e608d20c55c9f0f5cef562b574a076f-init/diff:/var/lib/docker/overlay2/1848d59bca4a2021bcbb29c87acc643922fd4dba99de17894dc1bd9977cabbd3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98f880b7600778d607445e1a47173ebe9e608d20c55c9f0f5cef562b574a076f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98f880b7600778d607445e1a47173ebe9e608d20c55c9f0f5cef562b574a076f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98f880b7600778d607445e1a47173ebe9e608d20c55c9f0f5cef562b574a076f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-925274",
	                "Source": "/var/lib/docker/volumes/addons-925274/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-925274",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-925274",
	                "name.minikube.sigs.k8s.io": "addons-925274",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c2d66b6dc25ea574f6f203f4a245981d82df57a844c16785a8df7aaa456f068",
	            "SandboxKey": "/var/run/docker/netns/4c2d66b6dc25",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33873"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33874"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33875"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33876"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-925274": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6cb7c51a87e376fadbf6d908daaa7cb36b0ee818a97dc4322a3cf748b49e0196",
	                    "EndpointID": "322253ff0a1008b1654ddca2689c897c0961cb48cf496e56fe9d558df940cdb2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-925274",
	                        "9b40238cd75d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-925274 -n addons-925274
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-925274 logs -n 25: (1.911009608s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-118950                                                                     | download-only-118950   | jenkins | v1.35.0 | 17 Feb 25 12:33 UTC | 17 Feb 25 12:33 UTC |
	| start   | --download-only -p                                                                          | download-docker-432282 | jenkins | v1.35.0 | 17 Feb 25 12:33 UTC |                     |
	|         | download-docker-432282                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-432282                                                                   | download-docker-432282 | jenkins | v1.35.0 | 17 Feb 25 12:33 UTC | 17 Feb 25 12:33 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-810958   | jenkins | v1.35.0 | 17 Feb 25 12:33 UTC |                     |
	|         | binary-mirror-810958                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43611                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-810958                                                                     | binary-mirror-810958   | jenkins | v1.35.0 | 17 Feb 25 12:33 UTC | 17 Feb 25 12:33 UTC |
	| addons  | disable dashboard -p                                                                        | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:33 UTC |                     |
	|         | addons-925274                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:33 UTC |                     |
	|         | addons-925274                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-925274 --wait=true                                                                | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:33 UTC | 17 Feb 25 12:36 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-925274 addons disable                                                                | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:36 UTC | 17 Feb 25 12:36 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-925274 addons disable                                                                | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:36 UTC | 17 Feb 25 12:36 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:36 UTC | 17 Feb 25 12:36 UTC |
	|         | -p addons-925274                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-925274 addons disable                                                                | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:36 UTC | 17 Feb 25 12:37 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-925274 ip                                                                            | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:37 UTC | 17 Feb 25 12:37 UTC |
	| addons  | addons-925274 addons disable                                                                | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:37 UTC | 17 Feb 25 12:37 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-925274 addons disable                                                                | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:37 UTC | 17 Feb 25 12:37 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-925274 addons                                                                        | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:37 UTC | 17 Feb 25 12:37 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-925274 ssh cat                                                                       | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:37 UTC | 17 Feb 25 12:37 UTC |
	|         | /opt/local-path-provisioner/pvc-b18182c0-ddd9-4a0d-a7b0-1917dbefd7b4_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-925274 addons disable                                                                | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:37 UTC | 17 Feb 25 12:37 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-925274 addons                                                                        | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:37 UTC | 17 Feb 25 12:37 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-925274 addons                                                                        | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:37 UTC | 17 Feb 25 12:37 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-925274 addons                                                                        | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:37 UTC | 17 Feb 25 12:37 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-925274 ssh curl -s                                                                   | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:37 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-925274 addons                                                                        | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:38 UTC | 17 Feb 25 12:38 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-925274 addons                                                                        | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:38 UTC | 17 Feb 25 12:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-925274 ip                                                                            | addons-925274          | jenkins | v1.35.0 | 17 Feb 25 12:39 UTC | 17 Feb 25 12:39 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/17 12:33:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0217 12:33:27.078498  861156 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:33:27.078624  861156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:33:27.078635  861156 out.go:358] Setting ErrFile to fd 2...
	I0217 12:33:27.078640  861156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:33:27.078908  861156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
	I0217 12:33:27.079373  861156 out.go:352] Setting JSON to false
	I0217 12:33:27.080271  861156 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18955,"bootTime":1739776652,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0217 12:33:27.080350  861156 start.go:139] virtualization:  
	I0217 12:33:27.083657  861156 out.go:177] * [addons-925274] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0217 12:33:27.087363  861156 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 12:33:27.087445  861156 notify.go:220] Checking for updates...
	I0217 12:33:27.093201  861156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 12:33:27.096048  861156 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-855004/kubeconfig
	I0217 12:33:27.098979  861156 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-855004/.minikube
	I0217 12:33:27.101836  861156 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0217 12:33:27.104658  861156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 12:33:27.107696  861156 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 12:33:27.136461  861156 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0217 12:33:27.136630  861156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:33:27.192397  861156 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:46 SystemTime:2025-02-17 12:33:27.183607536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:33:27.192535  861156 docker.go:318] overlay module found
	I0217 12:33:27.195694  861156 out.go:177] * Using the docker driver based on user configuration
	I0217 12:33:27.198559  861156 start.go:297] selected driver: docker
	I0217 12:33:27.198578  861156 start.go:901] validating driver "docker" against <nil>
	I0217 12:33:27.198594  861156 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 12:33:27.199329  861156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:33:27.254533  861156 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:46 SystemTime:2025-02-17 12:33:27.245081614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:33:27.254751  861156 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0217 12:33:27.255004  861156 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0217 12:33:27.257876  861156 out.go:177] * Using Docker driver with root privileges
	I0217 12:33:27.260707  861156 cni.go:84] Creating CNI manager for ""
	I0217 12:33:27.260773  861156 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0217 12:33:27.260787  861156 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0217 12:33:27.260876  861156 start.go:340] cluster config:
	{Name:addons-925274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-925274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 12:33:27.265843  861156 out.go:177] * Starting "addons-925274" primary control-plane node in "addons-925274" cluster
	I0217 12:33:27.268837  861156 cache.go:121] Beginning downloading kic base image for docker with crio
	I0217 12:33:27.271759  861156 out.go:177] * Pulling base image v0.0.46-1739182054-20387 ...
	I0217 12:33:27.274540  861156 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0217 12:33:27.274593  861156 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-855004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
	I0217 12:33:27.274607  861156 cache.go:56] Caching tarball of preloaded images
	I0217 12:33:27.274638  861156 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon
	I0217 12:33:27.274699  861156 preload.go:172] Found /home/jenkins/minikube-integration/20427-855004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0217 12:33:27.274708  861156 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0217 12:33:27.275082  861156 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/config.json ...
	I0217 12:33:27.275151  861156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/config.json: {Name:mkaa248c712951a44eabec9cde8888b755ae7106 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 12:33:27.290368  861156 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad to local cache
	I0217 12:33:27.290508  861156 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local cache directory
	I0217 12:33:27.290528  861156 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local cache directory, skipping pull
	I0217 12:33:27.290533  861156 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad exists in cache, skipping pull
	I0217 12:33:27.290540  861156 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad as a tarball
	I0217 12:33:27.290545  861156 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad from local cache
	I0217 12:33:44.702740  861156 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad from cached tarball
	I0217 12:33:44.702785  861156 cache.go:230] Successfully downloaded all kic artifacts
	I0217 12:33:44.702829  861156 start.go:360] acquireMachinesLock for addons-925274: {Name:mk0b34b3571fc4c26c7222b9b26ba6aa80feed34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 12:33:44.702952  861156 start.go:364] duration metric: took 97.631µs to acquireMachinesLock for "addons-925274"
	I0217 12:33:44.702983  861156 start.go:93] Provisioning new machine with config: &{Name:addons-925274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-925274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0217 12:33:44.703063  861156 start.go:125] createHost starting for "" (driver="docker")
	I0217 12:33:44.706440  861156 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0217 12:33:44.706697  861156 start.go:159] libmachine.API.Create for "addons-925274" (driver="docker")
	I0217 12:33:44.706733  861156 client.go:168] LocalClient.Create starting
	I0217 12:33:44.706840  861156 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20427-855004/.minikube/certs/ca.pem
	I0217 12:33:44.941191  861156 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20427-855004/.minikube/certs/cert.pem
	I0217 12:33:45.544670  861156 cli_runner.go:164] Run: docker network inspect addons-925274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0217 12:33:45.559696  861156 cli_runner.go:211] docker network inspect addons-925274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0217 12:33:45.559780  861156 network_create.go:284] running [docker network inspect addons-925274] to gather additional debugging logs...
	I0217 12:33:45.559801  861156 cli_runner.go:164] Run: docker network inspect addons-925274
	W0217 12:33:45.573640  861156 cli_runner.go:211] docker network inspect addons-925274 returned with exit code 1
	I0217 12:33:45.573673  861156 network_create.go:287] error running [docker network inspect addons-925274]: docker network inspect addons-925274: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-925274 not found
	I0217 12:33:45.573686  861156 network_create.go:289] output of [docker network inspect addons-925274]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-925274 not found
	
	** /stderr **
	I0217 12:33:45.573787  861156 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0217 12:33:45.590085  861156 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d9140}
	I0217 12:33:45.590128  861156 network_create.go:124] attempt to create docker network addons-925274 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0217 12:33:45.590191  861156 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-925274 addons-925274
	I0217 12:33:45.658110  861156 network_create.go:108] docker network addons-925274 192.168.49.0/24 created
	I0217 12:33:45.658146  861156 kic.go:121] calculated static IP "192.168.49.2" for the "addons-925274" container
	I0217 12:33:45.658220  861156 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0217 12:33:45.673657  861156 cli_runner.go:164] Run: docker volume create addons-925274 --label name.minikube.sigs.k8s.io=addons-925274 --label created_by.minikube.sigs.k8s.io=true
	I0217 12:33:45.691539  861156 oci.go:103] Successfully created a docker volume addons-925274
	I0217 12:33:45.691632  861156 cli_runner.go:164] Run: docker run --rm --name addons-925274-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-925274 --entrypoint /usr/bin/test -v addons-925274:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad -d /var/lib
	I0217 12:33:47.145709  861156 cli_runner.go:217] Completed: docker run --rm --name addons-925274-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-925274 --entrypoint /usr/bin/test -v addons-925274:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad -d /var/lib: (1.454039998s)
	I0217 12:33:47.145741  861156 oci.go:107] Successfully prepared a docker volume addons-925274
	I0217 12:33:47.145771  861156 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0217 12:33:47.145792  861156 kic.go:194] Starting extracting preloaded images to volume ...
	I0217 12:33:47.145883  861156 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20427-855004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-925274:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad -I lz4 -xf /preloaded.tar -C /extractDir
	I0217 12:33:51.328603  861156 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20427-855004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-925274:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad -I lz4 -xf /preloaded.tar -C /extractDir: (4.182677681s)
	I0217 12:33:51.328644  861156 kic.go:203] duration metric: took 4.182848277s to extract preloaded images to volume ...
	W0217 12:33:51.328782  861156 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0217 12:33:51.328897  861156 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0217 12:33:51.380337  861156 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-925274 --name addons-925274 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-925274 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-925274 --network addons-925274 --ip 192.168.49.2 --volume addons-925274:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad
	I0217 12:33:51.754678  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Running}}
	I0217 12:33:51.780933  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:33:51.804503  861156 cli_runner.go:164] Run: docker exec addons-925274 stat /var/lib/dpkg/alternatives/iptables
	I0217 12:33:51.868780  861156 oci.go:144] the created container "addons-925274" has a running status.
	I0217 12:33:51.868812  861156 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa...
	I0217 12:33:52.561850  861156 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0217 12:33:52.583062  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:33:52.601785  861156 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0217 12:33:52.601805  861156 kic_runner.go:114] Args: [docker exec --privileged addons-925274 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0217 12:33:52.654827  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:33:52.676062  861156 machine.go:93] provisionDockerMachine start ...
	I0217 12:33:52.676162  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:33:52.695680  861156 main.go:141] libmachine: Using SSH client type: native
	I0217 12:33:52.695973  861156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I0217 12:33:52.695990  861156 main.go:141] libmachine: About to run SSH command:
	hostname
	I0217 12:33:52.831888  861156 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-925274
	
	I0217 12:33:52.831914  861156 ubuntu.go:169] provisioning hostname "addons-925274"
	I0217 12:33:52.831986  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:33:52.852940  861156 main.go:141] libmachine: Using SSH client type: native
	I0217 12:33:52.853191  861156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I0217 12:33:52.853212  861156 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-925274 && echo "addons-925274" | sudo tee /etc/hostname
	I0217 12:33:52.996316  861156 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-925274
	
	I0217 12:33:52.996401  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:33:53.015373  861156 main.go:141] libmachine: Using SSH client type: native
	I0217 12:33:53.015621  861156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I0217 12:33:53.015644  861156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-925274' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-925274/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-925274' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0217 12:33:53.144194  861156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0217 12:33:53.144226  861156 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20427-855004/.minikube CaCertPath:/home/jenkins/minikube-integration/20427-855004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20427-855004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20427-855004/.minikube}
	I0217 12:33:53.144255  861156 ubuntu.go:177] setting up certificates
	I0217 12:33:53.144265  861156 provision.go:84] configureAuth start
	I0217 12:33:53.144336  861156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-925274
	I0217 12:33:53.161499  861156 provision.go:143] copyHostCerts
	I0217 12:33:53.161585  861156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-855004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20427-855004/.minikube/ca.pem (1082 bytes)
	I0217 12:33:53.161706  861156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-855004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20427-855004/.minikube/cert.pem (1123 bytes)
	I0217 12:33:53.161769  861156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-855004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20427-855004/.minikube/key.pem (1675 bytes)
	I0217 12:33:53.161821  861156 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20427-855004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20427-855004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20427-855004/.minikube/certs/ca-key.pem org=jenkins.addons-925274 san=[127.0.0.1 192.168.49.2 addons-925274 localhost minikube]
	I0217 12:33:53.652684  861156 provision.go:177] copyRemoteCerts
	I0217 12:33:53.652749  861156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0217 12:33:53.652789  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:33:53.668657  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:33:53.760377  861156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-855004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0217 12:33:53.783605  861156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-855004/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0217 12:33:53.807628  861156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-855004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0217 12:33:53.831415  861156 provision.go:87] duration metric: took 687.124982ms to configureAuth
	I0217 12:33:53.831445  861156 ubuntu.go:193] setting minikube options for container-runtime
	I0217 12:33:53.831631  861156 config.go:182] Loaded profile config "addons-925274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0217 12:33:53.831744  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:33:53.848584  861156 main.go:141] libmachine: Using SSH client type: native
	I0217 12:33:53.848862  861156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I0217 12:33:53.848884  861156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0217 12:33:54.097256  861156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0217 12:33:54.097331  861156 machine.go:96] duration metric: took 1.421239974s to provisionDockerMachine
	I0217 12:33:54.097359  861156 client.go:171] duration metric: took 9.390618091s to LocalClient.Create
	I0217 12:33:54.097416  861156 start.go:167] duration metric: took 9.390718545s to libmachine.API.Create "addons-925274"
	I0217 12:33:54.097429  861156 start.go:293] postStartSetup for "addons-925274" (driver="docker")
	I0217 12:33:54.097441  861156 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0217 12:33:54.097523  861156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0217 12:33:54.097566  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:33:54.114995  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:33:54.209695  861156 ssh_runner.go:195] Run: cat /etc/os-release
	I0217 12:33:54.213199  861156 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0217 12:33:54.213238  861156 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0217 12:33:54.213250  861156 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0217 12:33:54.213260  861156 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0217 12:33:54.213273  861156 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-855004/.minikube/addons for local assets ...
	I0217 12:33:54.213344  861156 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-855004/.minikube/files for local assets ...
	I0217 12:33:54.213367  861156 start.go:296] duration metric: took 115.931378ms for postStartSetup
	I0217 12:33:54.213690  861156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-925274
	I0217 12:33:54.230836  861156 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/config.json ...
	I0217 12:33:54.231119  861156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 12:33:54.231180  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:33:54.248479  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:33:54.336791  861156 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0217 12:33:54.341672  861156 start.go:128] duration metric: took 9.638592526s to createHost
	I0217 12:33:54.341695  861156 start.go:83] releasing machines lock for "addons-925274", held for 9.638729294s
	I0217 12:33:54.341766  861156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-925274
	I0217 12:33:54.358490  861156 ssh_runner.go:195] Run: cat /version.json
	I0217 12:33:54.358542  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:33:54.358847  861156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0217 12:33:54.358913  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:33:54.376798  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:33:54.389465  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:33:54.471295  861156 ssh_runner.go:195] Run: systemctl --version
	I0217 12:33:54.604340  861156 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0217 12:33:54.746565  861156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0217 12:33:54.750878  861156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0217 12:33:54.771811  861156 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0217 12:33:54.771953  861156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0217 12:33:54.810295  861156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0217 12:33:54.810321  861156 start.go:495] detecting cgroup driver to use...
	I0217 12:33:54.810355  861156 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0217 12:33:54.810406  861156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0217 12:33:54.827323  861156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 12:33:54.839617  861156 docker.go:217] disabling cri-docker service (if available) ...
	I0217 12:33:54.839684  861156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0217 12:33:54.854179  861156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0217 12:33:54.869591  861156 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0217 12:33:54.958586  861156 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0217 12:33:55.044837  861156 docker.go:233] disabling docker service ...
	I0217 12:33:55.044938  861156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0217 12:33:55.065244  861156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0217 12:33:55.078536  861156 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0217 12:33:55.165160  861156 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0217 12:33:55.260999  861156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0217 12:33:55.278227  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 12:33:55.295571  861156 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0217 12:33:55.295666  861156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0217 12:33:55.305941  861156 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0217 12:33:55.306043  861156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0217 12:33:55.316306  861156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0217 12:33:55.326286  861156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0217 12:33:55.336411  861156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0217 12:33:55.346090  861156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0217 12:33:55.356684  861156 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0217 12:33:55.373371  861156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0217 12:33:55.383536  861156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0217 12:33:55.392439  861156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0217 12:33:55.401017  861156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 12:33:55.486475  861156 ssh_runner.go:195] Run: sudo systemctl restart crio
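The run of `sed` commands above (12:33:55.295 through 12:33:55.373) rewrites CRI-O's drop-in config before the restart: pin the pause image to `registry.k8s.io/pause:3.10`, switch `cgroup_manager` to `cgroupfs`, force conmon into the `"pod"` cgroup, and add a `default_sysctls` entry so pods may bind low ports (which ingress-nginx needs). A minimal sketch of the same edits, run against a sample drop-in in a temp file rather than the real `/etc/crio/crio.conf.d/02-crio.conf` (sample contents are assumed; requires GNU sed, as on the Ubuntu 22.04 base image):

```shell
# Scratch copy standing in for /etc/crio/crio.conf.d/02-crio.conf
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Pin the pause image expected by this Kubernetes version
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
# Match the cgroup driver detected on the host (cgroupfs here)
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
# With cgroupfs, conmon must run in the "pod" cgroup: drop old value, re-add
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
# Ensure a default_sysctls list exists, then allow unprivileged low ports
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = [\n]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
cat "$conf"
```

On the real node these edits are followed by `systemctl daemon-reload` and `systemctl restart crio`, as the log shows.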
	I0217 12:33:55.595482  861156 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0217 12:33:55.595675  861156 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0217 12:33:55.599296  861156 start.go:563] Will wait 60s for crictl version
	I0217 12:33:55.599357  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:33:55.602986  861156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0217 12:33:55.640651  861156 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0217 12:33:55.640750  861156 ssh_runner.go:195] Run: crio --version
	I0217 12:33:55.683330  861156 ssh_runner.go:195] Run: crio --version
	I0217 12:33:55.726200  861156 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	I0217 12:33:55.729238  861156 cli_runner.go:164] Run: docker network inspect addons-925274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0217 12:33:55.745860  861156 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0217 12:33:55.749648  861156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
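The one-liner above keeps the `/etc/hosts` update idempotent: it filters out any stale `host.minikube.internal` entry, appends the current mapping, and copies the result back over the original. The same pattern against a scratch file (temp path instead of the real `/etc/hosts`; the stale `10.0.0.9` entry is invented for illustration):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Drop any stale mapping, append the fresh one, then swap the file in
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Because the old line is removed before the new one is appended, re-running the command never accumulates duplicate entries.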
	I0217 12:33:55.760674  861156 kubeadm.go:883] updating cluster {Name:addons-925274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-925274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0217 12:33:55.760795  861156 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0217 12:33:55.760857  861156 ssh_runner.go:195] Run: sudo crictl images --output json
	I0217 12:33:55.837269  861156 crio.go:514] all images are preloaded for cri-o runtime.
	I0217 12:33:55.837296  861156 crio.go:433] Images already preloaded, skipping extraction
	I0217 12:33:55.837357  861156 ssh_runner.go:195] Run: sudo crictl images --output json
	I0217 12:33:55.875085  861156 crio.go:514] all images are preloaded for cri-o runtime.
	I0217 12:33:55.875107  861156 cache_images.go:84] Images are preloaded, skipping loading
	I0217 12:33:55.875115  861156 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.1 crio true true} ...
	I0217 12:33:55.875214  861156 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-925274 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-925274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0217 12:33:55.875300  861156 ssh_runner.go:195] Run: crio config
	I0217 12:33:55.923201  861156 cni.go:84] Creating CNI manager for ""
	I0217 12:33:55.923224  861156 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0217 12:33:55.923235  861156 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0217 12:33:55.923260  861156 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-925274 NodeName:addons-925274 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0217 12:33:55.923396  861156 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-925274"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0217 12:33:55.923478  861156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0217 12:33:55.932327  861156 binaries.go:44] Found k8s binaries, skipping transfer
	I0217 12:33:55.932422  861156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0217 12:33:55.941110  861156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0217 12:33:55.959320  861156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0217 12:33:55.977975  861156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0217 12:33:55.996297  861156 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0217 12:33:55.999754  861156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0217 12:33:56.012517  861156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 12:33:56.094716  861156 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0217 12:33:56.108598  861156 certs.go:68] Setting up /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274 for IP: 192.168.49.2
	I0217 12:33:56.108628  861156 certs.go:194] generating shared ca certs ...
	I0217 12:33:56.108644  861156 certs.go:226] acquiring lock for ca certs: {Name:mk662801a6b6ea928dee860e03864a785a67a922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 12:33:56.108835  861156 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20427-855004/.minikube/ca.key
	I0217 12:33:56.525710  861156 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20427-855004/.minikube/ca.crt ...
	I0217 12:33:56.525757  861156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-855004/.minikube/ca.crt: {Name:mk92ebc49bdf2959a64a5f26af755977bd57e2cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 12:33:56.526639  861156 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20427-855004/.minikube/ca.key ...
	I0217 12:33:56.526657  861156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-855004/.minikube/ca.key: {Name:mkb8032b5cbc29d502c82653ae5986d420aa2896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 12:33:56.527377  861156 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20427-855004/.minikube/proxy-client-ca.key
	I0217 12:33:56.732517  861156 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20427-855004/.minikube/proxy-client-ca.crt ...
	I0217 12:33:56.732546  861156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-855004/.minikube/proxy-client-ca.crt: {Name:mk358dc814921d3918298aa728fa334b8daf89d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 12:33:56.733407  861156 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20427-855004/.minikube/proxy-client-ca.key ...
	I0217 12:33:56.733424  861156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-855004/.minikube/proxy-client-ca.key: {Name:mk563c883851f5e5e1886e4a708a8ff39aa97424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 12:33:56.733520  861156 certs.go:256] generating profile certs ...
	I0217 12:33:56.733585  861156 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.key
	I0217 12:33:56.733605  861156 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt with IP's: []
	I0217 12:33:56.872740  861156 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt ...
	I0217 12:33:56.872770  861156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: {Name:mk8f894a217be713fd7757b4e98a60c703a5f128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 12:33:56.872968  861156 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.key ...
	I0217 12:33:56.872986  861156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.key: {Name:mk4a84c127850c1e2fd92ea3feb210698022b631 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 12:33:56.873696  861156 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/apiserver.key.a5770c2f
	I0217 12:33:56.873723  861156 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/apiserver.crt.a5770c2f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0217 12:33:57.048853  861156 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/apiserver.crt.a5770c2f ...
	I0217 12:33:57.048884  861156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/apiserver.crt.a5770c2f: {Name:mk94c45dd684dd2232c6a70e504ac60ec9bf8dc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 12:33:57.049667  861156 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/apiserver.key.a5770c2f ...
	I0217 12:33:57.049738  861156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/apiserver.key.a5770c2f: {Name:mka4cd958c43d1179e3e8489c448d43f32278450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 12:33:57.049833  861156 certs.go:381] copying /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/apiserver.crt.a5770c2f -> /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/apiserver.crt
	I0217 12:33:57.049922  861156 certs.go:385] copying /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/apiserver.key.a5770c2f -> /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/apiserver.key
	I0217 12:33:57.049975  861156 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/proxy-client.key
	I0217 12:33:57.049995  861156 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/proxy-client.crt with IP's: []
	I0217 12:33:57.816909  861156 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/proxy-client.crt ...
	I0217 12:33:57.816950  861156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/proxy-client.crt: {Name:mkbf947723f30f2cc1daca11a2b00d7081fabea2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 12:33:57.817201  861156 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/proxy-client.key ...
	I0217 12:33:57.817215  861156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/proxy-client.key: {Name:mkc37b5a9defe212fc182746853684f5d0527b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 12:33:57.817419  861156 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-855004/.minikube/certs/ca-key.pem (1675 bytes)
	I0217 12:33:57.817458  861156 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-855004/.minikube/certs/ca.pem (1082 bytes)
	I0217 12:33:57.817482  861156 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-855004/.minikube/certs/cert.pem (1123 bytes)
	I0217 12:33:57.817506  861156 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-855004/.minikube/certs/key.pem (1675 bytes)
	I0217 12:33:57.818171  861156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-855004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0217 12:33:57.843677  861156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-855004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0217 12:33:57.868056  861156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-855004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0217 12:33:57.892310  861156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-855004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0217 12:33:57.917092  861156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0217 12:33:57.942181  861156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0217 12:33:57.965970  861156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0217 12:33:57.988386  861156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0217 12:33:58.012892  861156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-855004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0217 12:33:58.038269  861156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0217 12:33:58.056685  861156 ssh_runner.go:195] Run: openssl version
	I0217 12:33:58.061970  861156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0217 12:33:58.071684  861156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0217 12:33:58.075414  861156 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 17 12:33 /usr/share/ca-certificates/minikubeCA.pem
	I0217 12:33:58.075487  861156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0217 12:33:58.082699  861156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
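The `openssl` steps above (12:33:58.075 through 12:33:58.082) install `minikubeCA.pem` into the OpenSSL-style system trust store, which looks certificates up by subject hash plus a `.0` suffix (here `b5213941.0`). A sketch of the same hash-and-link flow using a throwaway self-signed certificate in a temp directory (all names and paths illustrative, not the real `/etc/ssl/certs`):

```shell
dir=$(mktemp -d)
# Generate a throwaway self-signed CA, standing in for minikubeCA.pem
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
# Trust stores resolve certs via <subject-hash>.0 symlinks
h=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$h.0"
ls -l "$dir/$h.0"
```

The `test -L … || ln -fs …` guard in the log serves the same purpose as `ln -fs` here: re-running the setup is harmless because the symlink is simply recreated.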
	I0217 12:33:58.092385  861156 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0217 12:33:58.095649  861156 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0217 12:33:58.095697  861156 kubeadm.go:392] StartCluster: {Name:addons-925274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-925274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 12:33:58.095775  861156 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0217 12:33:58.095851  861156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0217 12:33:58.138333  861156 cri.go:89] found id: ""
	I0217 12:33:58.138402  861156 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0217 12:33:58.147599  861156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0217 12:33:58.156923  861156 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0217 12:33:58.157002  861156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0217 12:33:58.165805  861156 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0217 12:33:58.165826  861156 kubeadm.go:157] found existing configuration files:
	
	I0217 12:33:58.165902  861156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0217 12:33:58.174530  861156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0217 12:33:58.174645  861156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0217 12:33:58.183331  861156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0217 12:33:58.192015  861156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0217 12:33:58.192084  861156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0217 12:33:58.200494  861156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0217 12:33:58.209996  861156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0217 12:33:58.210081  861156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0217 12:33:58.218367  861156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0217 12:33:58.227151  861156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0217 12:33:58.227241  861156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0217 12:33:58.235987  861156 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0217 12:33:58.281599  861156 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0217 12:33:58.281881  861156 kubeadm.go:310] [preflight] Running pre-flight checks
	I0217 12:33:58.315650  861156 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0217 12:33:58.315726  861156 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1077-aws
	I0217 12:33:58.315769  861156 kubeadm.go:310] OS: Linux
	I0217 12:33:58.315836  861156 kubeadm.go:310] CGROUPS_CPU: enabled
	I0217 12:33:58.315890  861156 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0217 12:33:58.315942  861156 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0217 12:33:58.315994  861156 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0217 12:33:58.316046  861156 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0217 12:33:58.316100  861156 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0217 12:33:58.316148  861156 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0217 12:33:58.316203  861156 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0217 12:33:58.316252  861156 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0217 12:33:58.382428  861156 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0217 12:33:58.382547  861156 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0217 12:33:58.382640  861156 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0217 12:33:58.392189  861156 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0217 12:33:58.398077  861156 out.go:235]   - Generating certificates and keys ...
	I0217 12:33:58.398275  861156 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0217 12:33:58.398396  861156 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0217 12:33:58.866365  861156 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0217 12:33:59.091814  861156 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0217 12:33:59.252777  861156 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0217 12:33:59.501536  861156 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0217 12:33:59.833755  861156 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0217 12:33:59.834094  861156 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-925274 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0217 12:34:00.087063  861156 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0217 12:34:00.087202  861156 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-925274 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0217 12:34:01.105392  861156 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0217 12:34:01.502524  861156 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0217 12:34:01.974611  861156 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0217 12:34:01.975220  861156 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0217 12:34:02.205366  861156 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0217 12:34:03.200758  861156 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0217 12:34:03.580862  861156 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0217 12:34:03.926610  861156 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0217 12:34:04.414637  861156 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0217 12:34:04.415489  861156 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0217 12:34:04.418574  861156 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0217 12:34:04.421919  861156 out.go:235]   - Booting up control plane ...
	I0217 12:34:04.422034  861156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0217 12:34:04.422138  861156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0217 12:34:04.424124  861156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0217 12:34:04.434638  861156 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0217 12:34:04.440828  861156 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0217 12:34:04.441070  861156 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0217 12:34:04.531330  861156 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0217 12:34:04.531454  861156 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0217 12:34:05.532999  861156 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001761439s
	I0217 12:34:05.533090  861156 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0217 12:34:11.534532  861156 kubeadm.go:310] [api-check] The API server is healthy after 6.001514861s
	I0217 12:34:11.553820  861156 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0217 12:34:11.568337  861156 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0217 12:34:11.595045  861156 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0217 12:34:11.595252  861156 kubeadm.go:310] [mark-control-plane] Marking the node addons-925274 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0217 12:34:11.606713  861156 kubeadm.go:310] [bootstrap-token] Using token: c8b2sa.sup8mu7h2jor0tye
	I0217 12:34:11.611717  861156 out.go:235]   - Configuring RBAC rules ...
	I0217 12:34:11.611877  861156 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0217 12:34:11.617012  861156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0217 12:34:11.627112  861156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0217 12:34:11.630722  861156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0217 12:34:11.637226  861156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0217 12:34:11.642125  861156 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0217 12:34:11.943251  861156 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0217 12:34:12.382726  861156 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0217 12:34:12.943929  861156 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0217 12:34:12.945125  861156 kubeadm.go:310] 
	I0217 12:34:12.945197  861156 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0217 12:34:12.945203  861156 kubeadm.go:310] 
	I0217 12:34:12.945280  861156 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0217 12:34:12.945285  861156 kubeadm.go:310] 
	I0217 12:34:12.945311  861156 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0217 12:34:12.945370  861156 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0217 12:34:12.945421  861156 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0217 12:34:12.945425  861156 kubeadm.go:310] 
	I0217 12:34:12.945479  861156 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0217 12:34:12.945484  861156 kubeadm.go:310] 
	I0217 12:34:12.945531  861156 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0217 12:34:12.945536  861156 kubeadm.go:310] 
	I0217 12:34:12.945588  861156 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0217 12:34:12.945678  861156 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0217 12:34:12.945746  861156 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0217 12:34:12.945751  861156 kubeadm.go:310] 
	I0217 12:34:12.945834  861156 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0217 12:34:12.945911  861156 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0217 12:34:12.945915  861156 kubeadm.go:310] 
	I0217 12:34:12.945999  861156 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c8b2sa.sup8mu7h2jor0tye \
	I0217 12:34:12.946103  861156 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2f7347e88346ac952df4eb8e80406ca0ed126f31748bfb58bddc62f389d7d4dc \
	I0217 12:34:12.946124  861156 kubeadm.go:310] 	--control-plane 
	I0217 12:34:12.946128  861156 kubeadm.go:310] 
	I0217 12:34:12.946212  861156 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0217 12:34:12.946216  861156 kubeadm.go:310] 
	I0217 12:34:12.946297  861156 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c8b2sa.sup8mu7h2jor0tye \
	I0217 12:34:12.946400  861156 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2f7347e88346ac952df4eb8e80406ca0ed126f31748bfb58bddc62f389d7d4dc 
	I0217 12:34:12.949132  861156 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0217 12:34:12.949358  861156 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1077-aws\n", err: exit status 1
	I0217 12:34:12.949467  861156 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0217 12:34:12.949490  861156 cni.go:84] Creating CNI manager for ""
	I0217 12:34:12.949504  861156 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0217 12:34:12.952549  861156 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0217 12:34:12.955347  861156 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0217 12:34:12.960148  861156 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0217 12:34:12.960169  861156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0217 12:34:12.977976  861156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0217 12:34:13.253788  861156 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0217 12:34:13.253859  861156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0217 12:34:13.253968  861156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-925274 minikube.k8s.io/updated_at=2025_02_17T12_34_13_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=d5460083481c20438a5263486cb626e4191c2126 minikube.k8s.io/name=addons-925274 minikube.k8s.io/primary=true
	I0217 12:34:13.409965  861156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0217 12:34:13.410024  861156 ops.go:34] apiserver oom_adj: -16
	I0217 12:34:13.910835  861156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0217 12:34:14.410573  861156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0217 12:34:14.910297  861156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0217 12:34:15.410879  861156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0217 12:34:15.910327  861156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0217 12:34:16.410978  861156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0217 12:34:16.494254  861156 kubeadm.go:1113] duration metric: took 3.240461962s to wait for elevateKubeSystemPrivileges
	I0217 12:34:16.494281  861156 kubeadm.go:394] duration metric: took 18.398588521s to StartCluster
	I0217 12:34:16.494298  861156 settings.go:142] acquiring lock: {Name:mkada87bee97dd8a67a8f36fa408e92130831c22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 12:34:16.494430  861156 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20427-855004/kubeconfig
	I0217 12:34:16.494836  861156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-855004/kubeconfig: {Name:mk0d1c4db365c88e0e9cbae220c492698abccf34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 12:34:16.495047  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0217 12:34:16.495075  861156 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0217 12:34:16.495296  861156 config.go:182] Loaded profile config "addons-925274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0217 12:34:16.495336  861156 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0217 12:34:16.495421  861156 addons.go:69] Setting yakd=true in profile "addons-925274"
	I0217 12:34:16.495434  861156 addons.go:238] Setting addon yakd=true in "addons-925274"
	I0217 12:34:16.495460  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.495974  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.496312  861156 addons.go:69] Setting inspektor-gadget=true in profile "addons-925274"
	I0217 12:34:16.496336  861156 addons.go:238] Setting addon inspektor-gadget=true in "addons-925274"
	I0217 12:34:16.496370  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.496806  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.497302  861156 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-925274"
	I0217 12:34:16.497325  861156 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-925274"
	I0217 12:34:16.497351  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.497763  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.501218  861156 addons.go:69] Setting metrics-server=true in profile "addons-925274"
	I0217 12:34:16.501294  861156 addons.go:238] Setting addon metrics-server=true in "addons-925274"
	I0217 12:34:16.501367  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.501886  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.502743  861156 addons.go:69] Setting cloud-spanner=true in profile "addons-925274"
	I0217 12:34:16.502780  861156 addons.go:238] Setting addon cloud-spanner=true in "addons-925274"
	I0217 12:34:16.502811  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.503267  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.514922  861156 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-925274"
	I0217 12:34:16.515066  861156 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-925274"
	I0217 12:34:16.515111  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.515589  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.527952  861156 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-925274"
	I0217 12:34:16.528000  861156 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-925274"
	I0217 12:34:16.528057  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.528582  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.528834  861156 addons.go:69] Setting default-storageclass=true in profile "addons-925274"
	I0217 12:34:16.528888  861156 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-925274"
	I0217 12:34:16.529217  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.550226  861156 addons.go:69] Setting registry=true in profile "addons-925274"
	I0217 12:34:16.550305  861156 addons.go:238] Setting addon registry=true in "addons-925274"
	I0217 12:34:16.550356  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.550898  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.551095  861156 addons.go:69] Setting gcp-auth=true in profile "addons-925274"
	I0217 12:34:16.551136  861156 mustload.go:65] Loading cluster: addons-925274
	I0217 12:34:16.551341  861156 config.go:182] Loaded profile config "addons-925274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0217 12:34:16.551677  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.578571  861156 addons.go:69] Setting ingress=true in profile "addons-925274"
	I0217 12:34:16.578669  861156 addons.go:238] Setting addon ingress=true in "addons-925274"
	I0217 12:34:16.578751  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.579086  861156 addons.go:69] Setting storage-provisioner=true in profile "addons-925274"
	I0217 12:34:16.579108  861156 addons.go:238] Setting addon storage-provisioner=true in "addons-925274"
	I0217 12:34:16.579137  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.579572  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.593177  861156 addons.go:69] Setting ingress-dns=true in profile "addons-925274"
	I0217 12:34:16.593230  861156 addons.go:238] Setting addon ingress-dns=true in "addons-925274"
	I0217 12:34:16.593274  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.593773  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.598704  861156 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-925274"
	I0217 12:34:16.598736  861156 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-925274"
	I0217 12:34:16.600664  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.615939  861156 addons.go:69] Setting volcano=true in profile "addons-925274"
	I0217 12:34:16.615972  861156 addons.go:238] Setting addon volcano=true in "addons-925274"
	I0217 12:34:16.616013  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.616507  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.621622  861156 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0217 12:34:16.624295  861156 out.go:177] * Verifying Kubernetes components...
	I0217 12:34:16.624480  861156 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0217 12:34:16.624510  861156 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0217 12:34:16.624581  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:16.629519  861156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 12:34:16.637682  861156 addons.go:69] Setting volumesnapshots=true in profile "addons-925274"
	I0217 12:34:16.637729  861156 addons.go:238] Setting addon volumesnapshots=true in "addons-925274"
	I0217 12:34:16.637772  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.638267  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.640843  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.645247  861156 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0217 12:34:16.668536  861156 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0217 12:34:16.668551  861156 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0217 12:34:16.674763  861156 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0217 12:34:16.674902  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0217 12:34:16.675003  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:16.686370  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.687445  861156 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0217 12:34:16.687499  861156 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0217 12:34:16.687584  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:16.750758  861156 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0217 12:34:16.763710  861156 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0217 12:34:16.763785  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0217 12:34:16.763905  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:16.764105  861156 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0217 12:34:16.773651  861156 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0217 12:34:16.781346  861156 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0217 12:34:16.784411  861156 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0217 12:34:16.812796  861156 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0217 12:34:16.812823  861156 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0217 12:34:16.812902  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:16.815408  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	W0217 12:34:16.819057  861156 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0217 12:34:16.825029  861156 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0217 12:34:16.830404  861156 addons.go:238] Setting addon default-storageclass=true in "addons-925274"
	I0217 12:34:16.830454  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.830989  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.831253  861156 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0217 12:34:16.831436  861156 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0217 12:34:16.831450  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0217 12:34:16.831505  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:16.849249  861156 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0217 12:34:16.850979  861156 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0217 12:34:16.856730  861156 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0217 12:34:16.857315  861156 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0217 12:34:16.857336  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0217 12:34:16.857405  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:16.859365  861156 out.go:177]   - Using image docker.io/registry:2.8.3
	I0217 12:34:16.859469  861156 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0217 12:34:16.863337  861156 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0217 12:34:16.863470  861156 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0217 12:34:16.863485  861156 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0217 12:34:16.863559  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:16.869056  861156 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0217 12:34:16.869167  861156 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0217 12:34:16.869210  861156 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0217 12:34:16.873799  861156 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0217 12:34:16.873825  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0217 12:34:16.873894  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:16.880347  861156 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0217 12:34:16.880372  861156 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0217 12:34:16.880438  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:16.901151  861156 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0217 12:34:16.904033  861156 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0217 12:34:16.905007  861156 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0217 12:34:16.905325  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0217 12:34:16.906094  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:16.938186  861156 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0217 12:34:16.938259  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0217 12:34:16.938357  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:16.939447  861156 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-925274"
	I0217 12:34:16.939516  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:16.940066  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:16.971298  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:34:16.983248  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:34:17.023386  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:34:17.053655  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
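The repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` invocations above recover the host port mapped to the container's SSH port (22/tcp), which the subsequent `sshutil` lines then dial as `127.0.0.1:33873`. A minimal sketch of the same lookup in Python, assuming the JSON shape `docker container inspect` emits (the `host_ssh_port` helper is hypothetical, not part of minikube):

```python
import json

def host_ssh_port(inspect_json: str) -> str:
    """Extract the host port mapped to container port 22/tcp from
    `docker container inspect` output; the Python equivalent of the
    Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}."""
    data = json.loads(inspect_json)
    # `docker container inspect` returns a JSON array of containers.
    container = data[0] if isinstance(data, list) else data
    return container["NetworkSettings"]["Ports"]["22/tcp"][0]["HostPort"]

# Trimmed example of the inspect output shape; the port value matches the log.
sample = json.dumps([{
    "NetworkSettings": {
        "Ports": {"22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33873"}]}
    }
}])
port = host_ssh_port(sample)
```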
	I0217 12:34:17.080017  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0217 12:34:17.080229  861156 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0217 12:34:17.080635  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:34:17.081757  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:34:17.085944  861156 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0217 12:34:17.085962  861156 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0217 12:34:17.086023  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:17.098695  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:34:17.117669  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:34:17.124017  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:34:17.124778  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:34:17.133348  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:34:17.161314  861156 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0217 12:34:17.167928  861156 out.go:177]   - Using image docker.io/busybox:stable
	I0217 12:34:17.168140  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:34:17.170928  861156 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0217 12:34:17.170950  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0217 12:34:17.171024  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:17.207315  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:34:17.257997  861156 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0217 12:34:17.258072  861156 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0217 12:34:17.332640  861156 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0217 12:34:17.332660  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0217 12:34:17.410104  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0217 12:34:17.442429  861156 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0217 12:34:17.442510  861156 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0217 12:34:17.455623  861156 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0217 12:34:17.455694  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0217 12:34:17.522147  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0217 12:34:17.558929  861156 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0217 12:34:17.559007  861156 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0217 12:34:17.581949  861156 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0217 12:34:17.582023  861156 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0217 12:34:17.582220  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0217 12:34:17.589877  861156 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0217 12:34:17.589951  861156 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0217 12:34:17.598992  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0217 12:34:17.636273  861156 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0217 12:34:17.636347  861156 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0217 12:34:17.662063  861156 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0217 12:34:17.662138  861156 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0217 12:34:17.682370  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0217 12:34:17.693879  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0217 12:34:17.697501  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0217 12:34:17.706326  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0217 12:34:17.723948  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0217 12:34:17.754208  861156 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0217 12:34:17.754232  861156 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0217 12:34:17.780102  861156 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0217 12:34:17.780126  861156 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0217 12:34:17.784381  861156 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0217 12:34:17.784402  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0217 12:34:17.830580  861156 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0217 12:34:17.830654  861156 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0217 12:34:17.859624  861156 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0217 12:34:17.859694  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0217 12:34:17.934077  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0217 12:34:17.978098  861156 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0217 12:34:17.978171  861156 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0217 12:34:17.985347  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0217 12:34:18.022586  861156 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0217 12:34:18.022664  861156 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0217 12:34:18.069445  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0217 12:34:18.167209  861156 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0217 12:34:18.167291  861156 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0217 12:34:18.185862  861156 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0217 12:34:18.185942  861156 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0217 12:34:18.380059  861156 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0217 12:34:18.380129  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0217 12:34:18.386288  861156 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0217 12:34:18.386359  861156 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0217 12:34:18.443074  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0217 12:34:18.494687  861156 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0217 12:34:18.494757  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0217 12:34:18.621077  861156 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0217 12:34:18.621150  861156 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0217 12:34:18.772562  861156 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0217 12:34:18.772622  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0217 12:34:18.911253  861156 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0217 12:34:18.911322  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0217 12:34:19.146057  861156 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0217 12:34:19.146133  861156 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0217 12:34:19.343936  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0217 12:34:19.815033  861156 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.734748654s)
	I0217 12:34:19.816022  861156 node_ready.go:35] waiting up to 6m0s for node "addons-925274" to be "Ready" ...
	I0217 12:34:19.816269  861156 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.736178845s)
	I0217 12:34:19.816326  861156 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
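The `sed` pipeline that completed above injects a CoreDNS `hosts` block (mapping `192.168.49.1` to `host.minikube.internal`) immediately before the Corefile's `forward . /etc/resolv.conf` line, then replaces the `coredns` ConfigMap. A sketch of that same text transformation in Python, assuming a typical Corefile layout (the `inject_host_record` helper is illustrative, not minikube code):

```python
import re

def inject_host_record(corefile: str, ip: str, hostname: str) -> str:
    """Insert a CoreDNS `hosts` block before the `forward . /etc/resolv.conf`
    line, mirroring the sed `/i` insertion used in the log."""
    hosts_block = (
        "        hosts {\n"
        f"           {ip} {hostname}\n"
        "           fallthrough\n"
        "        }\n"
    )
    return re.sub(
        r"^(        forward \. /etc/resolv\.conf.*)$",
        hosts_block + r"\1",
        corefile,
        count=1,
        flags=re.MULTILINE,
    )

corefile = (
    ".:53 {\n"
    "        errors\n"
    "        forward . /etc/resolv.conf {\n"
    "           max_concurrent 1000\n"
    "        }\n"
    "}\n"
)
patched = inject_host_record(corefile, "192.168.49.1", "host.minikube.internal")
```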
	I0217 12:34:20.440865  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.0307257s)
	I0217 12:34:20.686964  861156 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-925274" context rescaled to 1 replicas
	I0217 12:34:20.769202  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.246969192s)
	I0217 12:34:21.373884  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.791615549s)
	I0217 12:34:21.935661  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:22.778889  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.17981936s)
	I0217 12:34:22.779061  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.096619749s)
	I0217 12:34:22.779146  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.085195068s)
	I0217 12:34:23.879147  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.172740453s)
	I0217 12:34:23.879400  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.181826198s)
	I0217 12:34:23.879535  861156 addons.go:479] Verifying addon ingress=true in "addons-925274"
	I0217 12:34:23.879925  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.155906783s)
	I0217 12:34:23.880092  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.945911069s)
	I0217 12:34:23.880123  861156 addons.go:479] Verifying addon metrics-server=true in "addons-925274"
	I0217 12:34:23.880178  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.894757489s)
	I0217 12:34:23.880331  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.810793804s)
	I0217 12:34:23.880465  861156 addons.go:479] Verifying addon registry=true in "addons-925274"
	I0217 12:34:23.883182  861156 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-925274 service yakd-dashboard -n yakd-dashboard
	
	I0217 12:34:23.883350  861156 out.go:177] * Verifying ingress addon...
	I0217 12:34:23.885107  861156 out.go:177] * Verifying registry addon...
	I0217 12:34:23.887922  861156 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0217 12:34:23.888059  861156 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0217 12:34:23.893969  861156 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0217 12:34:23.893991  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:23.899286  861156 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0217 12:34:23.899306  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0217 12:34:23.907949  861156 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0217 12:34:23.913708  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.470545284s)
	W0217 12:34:23.913799  861156 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0217 12:34:23.913866  861156 retry.go:31] will retry after 147.558003ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
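The failure above is the classic `kubectl apply` ordering problem: the `VolumeSnapshotClass` object is submitted in the same apply as the CRDs that define it, and the CRDs are not yet established when the custom resource is validated, so the API server reports "no matches for kind". The `retry.go:31` line shows minikube's response: wait and re-apply. A minimal retry-with-backoff sketch in Python (the names and delays are assumptions for illustration, not minikube's actual `retry.go`):

```python
import time

def retry(fn, attempts=3, initial_delay=0.01, backoff=2.0):
    """Call fn until it succeeds, sleeping with exponential backoff between
    attempts; re-raise the last error once attempts are exhausted."""
    delay = initial_delay
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)
            delay *= backoff

calls = {"n": 0}

def flaky_apply():
    # Simulates an apply that fails until the (hypothetical) CRDs
    # become established on the third attempt.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("ensure CRDs are installed first")
    return "applied"

result = retry(flaky_apply)
```

An alternative to retrying is to apply the CRD manifests first and block on `kubectl wait --for=condition=established crd/...` before applying the custom resources.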
	I0217 12:34:24.061971  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0217 12:34:24.343002  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:24.344141  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.000124867s)
	I0217 12:34:24.344218  861156 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-925274"
	I0217 12:34:24.350071  861156 out.go:177] * Verifying csi-hostpath-driver addon...
	I0217 12:34:24.355189  861156 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0217 12:34:24.363088  861156 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0217 12:34:24.363158  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:24.461885  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:24.462063  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:24.858779  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:24.891570  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:24.892137  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:25.358704  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:25.391639  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:25.392061  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:25.859248  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:25.891774  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:25.892137  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:26.362276  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:26.391099  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:26.391335  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:26.819871  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:26.864389  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:26.892631  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:26.893841  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:26.911003  861156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.848936309s)
	I0217 12:34:26.933885  861156 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0217 12:34:26.933968  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:26.955101  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:34:27.059526  861156 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0217 12:34:27.084609  861156 addons.go:238] Setting addon gcp-auth=true in "addons-925274"
	I0217 12:34:27.084712  861156 host.go:66] Checking if "addons-925274" exists ...
	I0217 12:34:27.085226  861156 cli_runner.go:164] Run: docker container inspect addons-925274 --format={{.State.Status}}
	I0217 12:34:27.103084  861156 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0217 12:34:27.103147  861156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-925274
	I0217 12:34:27.121826  861156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/addons-925274/id_rsa Username:docker}
	I0217 12:34:27.226898  861156 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0217 12:34:27.229756  861156 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0217 12:34:27.232589  861156 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0217 12:34:27.232615  861156 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0217 12:34:27.251756  861156 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0217 12:34:27.251782  861156 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0217 12:34:27.270926  861156 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0217 12:34:27.270949  861156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0217 12:34:27.290859  861156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0217 12:34:27.359390  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:27.392379  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:27.393233  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:27.815281  861156 addons.go:479] Verifying addon gcp-auth=true in "addons-925274"
	I0217 12:34:27.818379  861156 out.go:177] * Verifying gcp-auth addon...
	I0217 12:34:27.822162  861156 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0217 12:34:27.828282  861156 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0217 12:34:27.828312  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:27.928228  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:27.928512  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:27.928914  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:28.325791  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:28.358476  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:28.391454  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:28.391492  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:28.825735  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:28.858801  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:28.891960  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:28.892504  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:29.319255  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:29.324981  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:29.358665  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:29.391935  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:29.392102  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:29.825016  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:29.858960  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:29.891113  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:29.891851  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:30.326012  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:30.358925  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:30.392037  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:30.392105  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:30.826116  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:30.927582  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:30.927964  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:30.928054  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:31.319583  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:31.325363  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:31.358303  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:31.392024  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:31.392161  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:31.825851  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:31.858836  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:31.891863  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:31.892162  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:32.325976  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:32.358909  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:32.391532  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:32.393734  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:32.827138  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:32.858495  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:32.891676  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:32.891894  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:33.319633  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:33.325024  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:33.358912  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:33.392091  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:33.392324  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:33.825287  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:33.859164  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:33.891185  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:33.891314  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:34.325145  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:34.358137  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:34.391155  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:34.391878  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:34.825918  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:34.858478  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:34.891704  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:34.891765  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:35.325058  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:35.359187  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:35.391699  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:35.391955  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:35.818751  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:35.825270  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:35.858949  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:35.892037  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:35.892357  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:36.325330  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:36.358063  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:36.390987  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:36.391123  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:36.828882  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:36.858710  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:36.891972  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:36.892410  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:37.325114  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:37.358872  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:37.391635  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:37.391798  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:37.820000  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:37.825726  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:37.858585  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:37.892006  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:37.892308  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:38.325807  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:38.358821  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:38.392305  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:38.392695  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:38.825434  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:38.857999  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:38.892129  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:38.892628  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:39.325675  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:39.359565  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:39.392683  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:39.394920  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:39.825377  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:39.857830  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:39.892056  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:39.892335  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:40.319452  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:40.325348  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:40.358208  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:40.391300  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:40.391452  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:40.825608  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:40.858095  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:40.891135  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:40.891339  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:41.325523  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:41.358115  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:41.391310  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:41.391560  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:41.825482  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:41.858189  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:41.890938  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:41.891050  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:42.319503  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:42.325971  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:42.358909  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:42.391672  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:42.392170  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:42.825933  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:42.858859  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:42.892098  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:42.892402  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:43.325047  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:43.358797  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:43.392111  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:43.392421  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:43.825354  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:43.858285  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:43.891438  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:43.891755  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:44.325696  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:44.358250  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:44.391151  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:44.391744  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:44.818808  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:44.825167  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:44.858718  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:44.891745  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:44.891955  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:45.325381  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:45.357980  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:45.391137  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:45.391291  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:45.825097  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:45.871884  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:45.892888  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:45.893104  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:46.325435  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:46.357955  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:46.391397  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:46.391631  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:46.819187  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:46.825060  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:46.859085  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:46.891147  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:46.891364  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:47.325508  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:47.358374  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:47.391630  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:47.392064  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:47.826081  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:47.859128  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:47.891544  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:47.891631  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:48.324950  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:48.358444  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:48.392182  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:48.392248  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:48.819548  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:48.825269  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:48.858720  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:48.891881  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:48.892060  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:49.325651  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:49.358660  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:49.392013  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:49.392237  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:49.825444  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:49.858072  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:49.891328  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:49.891437  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:50.324773  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:50.358568  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:50.391343  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:50.391992  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:50.826048  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:50.858933  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:50.891358  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:50.891547  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:51.319512  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:51.325778  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:51.359282  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:51.391935  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:51.392017  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:51.825524  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:51.858238  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:51.891433  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:51.891573  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:52.325572  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:52.358500  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:52.391789  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:52.392266  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:52.826052  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:52.858836  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:52.891734  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:52.891952  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:53.325351  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:53.358255  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:53.391015  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:53.391637  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:53.819569  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:53.825583  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:53.858569  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:53.891720  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:53.891919  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:54.326013  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:54.358441  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:54.392974  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:54.393387  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:54.825417  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:54.858439  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:54.892134  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:54.892512  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:55.324901  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:55.358819  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:55.391697  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:55.392287  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:55.825409  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:55.858002  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:55.891070  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:55.891428  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:56.319883  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:56.325362  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:56.358041  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:56.390955  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:56.391327  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:56.826706  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:56.858498  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:56.892103  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:56.892868  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:57.324973  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:57.359038  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:57.391415  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:57.391557  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:57.826462  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:57.858062  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:57.891232  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:57.891755  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:58.325569  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:58.358511  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:58.391328  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:58.391878  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:58.819150  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:34:58.825676  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:58.858625  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:58.891376  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:58.892004  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:59.325272  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:59.358132  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:59.391374  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:59.391746  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:34:59.825534  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:34:59.858862  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:34:59.890873  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:34:59.891088  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:00.325986  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:00.359552  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:00.391929  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:00.392427  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:00.819308  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:35:00.825132  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:00.859071  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:00.891584  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:00.891755  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:01.326249  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:01.359436  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:01.392475  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:01.392608  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:01.827695  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:01.860475  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:01.892617  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:01.892837  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:02.325688  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:02.358671  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:02.392056  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:02.392491  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:02.819678  861156 node_ready.go:53] node "addons-925274" has status "Ready":"False"
	I0217 12:35:02.825518  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:02.858495  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:02.891910  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:02.892348  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:03.325628  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:03.358612  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:03.392115  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:03.392215  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:03.841528  861156 node_ready.go:49] node "addons-925274" has status "Ready":"True"
	I0217 12:35:03.841595  861156 node_ready.go:38] duration metric: took 44.025502619s for node "addons-925274" to be "Ready" ...
	I0217 12:35:03.841620  861156 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0217 12:35:03.848917  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:03.856288  861156 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rvcxj" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:03.873462  861156 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0217 12:35:03.873489  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:03.928320  861156 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0217 12:35:03.928355  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:03.928795  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:04.394168  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:04.396511  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:04.427351  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:04.429702  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:04.826347  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:04.860213  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:04.892129  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:04.892282  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:05.328886  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:05.360536  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:05.392908  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:05.393321  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:05.830030  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:05.862876  861156 pod_ready.go:103] pod "coredns-668d6bf9bc-rvcxj" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:05.931135  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:05.931493  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:05.931628  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:06.325376  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:06.358560  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:06.393238  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:06.393335  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:06.825427  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:06.859257  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:06.861799  861156 pod_ready.go:93] pod "coredns-668d6bf9bc-rvcxj" in "kube-system" namespace has status "Ready":"True"
	I0217 12:35:06.861822  861156 pod_ready.go:82] duration metric: took 3.005499734s for pod "coredns-668d6bf9bc-rvcxj" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:06.861852  861156 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-925274" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:06.866719  861156 pod_ready.go:93] pod "etcd-addons-925274" in "kube-system" namespace has status "Ready":"True"
	I0217 12:35:06.866744  861156 pod_ready.go:82] duration metric: took 4.883028ms for pod "etcd-addons-925274" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:06.866758  861156 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-925274" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:06.871808  861156 pod_ready.go:93] pod "kube-apiserver-addons-925274" in "kube-system" namespace has status "Ready":"True"
	I0217 12:35:06.871853  861156 pod_ready.go:82] duration metric: took 5.085968ms for pod "kube-apiserver-addons-925274" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:06.871867  861156 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-925274" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:06.876542  861156 pod_ready.go:93] pod "kube-controller-manager-addons-925274" in "kube-system" namespace has status "Ready":"True"
	I0217 12:35:06.876566  861156 pod_ready.go:82] duration metric: took 4.67514ms for pod "kube-controller-manager-addons-925274" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:06.876582  861156 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9mkwr" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:06.881315  861156 pod_ready.go:93] pod "kube-proxy-9mkwr" in "kube-system" namespace has status "Ready":"True"
	I0217 12:35:06.881342  861156 pod_ready.go:82] duration metric: took 4.752168ms for pod "kube-proxy-9mkwr" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:06.881353  861156 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-925274" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:06.891274  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:06.892054  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:07.260138  861156 pod_ready.go:93] pod "kube-scheduler-addons-925274" in "kube-system" namespace has status "Ready":"True"
	I0217 12:35:07.260163  861156 pod_ready.go:82] duration metric: took 378.802456ms for pod "kube-scheduler-addons-925274" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:07.260175  861156 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-m6442" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:07.325913  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:07.358904  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:07.391700  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:07.391881  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:07.826544  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:07.858555  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:07.892535  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:07.892872  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:08.325801  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:08.359800  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:08.392256  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:08.392859  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:08.826481  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:08.861154  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:08.892458  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:08.893401  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:09.266204  861156 pod_ready.go:103] pod "metrics-server-7fbb699795-m6442" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:09.325521  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:09.359697  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:09.393982  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:09.394372  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:09.826238  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:09.860107  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:09.896125  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:09.896797  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:10.326164  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:10.359900  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:10.392336  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:10.392663  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:10.828322  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:10.861515  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:10.894120  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:10.895246  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:11.267100  861156 pod_ready.go:103] pod "metrics-server-7fbb699795-m6442" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:11.326633  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:11.360292  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:11.393770  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:11.394123  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:11.826689  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:11.859394  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:11.898980  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:11.899779  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:12.325887  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:12.360005  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:12.392629  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:12.394577  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:12.826233  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:12.860632  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:12.895061  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:12.895683  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:13.326567  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:13.361064  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:13.393596  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:13.394132  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:13.793988  861156 pod_ready.go:103] pod "metrics-server-7fbb699795-m6442" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:13.864270  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:13.867997  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:13.893938  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:13.894003  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:14.325585  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:14.360729  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:14.393718  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:14.394067  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:14.825867  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:14.858875  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:14.891675  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:14.891911  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:15.325873  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:15.359090  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:15.392425  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:15.392804  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:15.826418  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:15.859185  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:15.893450  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:15.893964  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:16.267109  861156 pod_ready.go:103] pod "metrics-server-7fbb699795-m6442" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:16.325080  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:16.359211  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:16.392622  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:16.392755  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:16.826206  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:16.860316  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:16.893767  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:16.894199  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:17.326768  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:17.359199  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:17.393188  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:17.393371  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:17.832909  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:17.862226  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:17.893428  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:17.893976  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:18.268542  861156 pod_ready.go:103] pod "metrics-server-7fbb699795-m6442" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:18.325231  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:18.359294  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:18.393207  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:18.393667  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:18.833793  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:18.881183  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:18.918000  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:18.918504  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:19.330368  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:19.361826  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:19.395916  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:19.396334  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:19.772653  861156 pod_ready.go:93] pod "metrics-server-7fbb699795-m6442" in "kube-system" namespace has status "Ready":"True"
	I0217 12:35:19.772729  861156 pod_ready.go:82] duration metric: took 12.512545832s for pod "metrics-server-7fbb699795-m6442" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:19.772758  861156 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-s4mtb" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:19.826179  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:19.859301  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:19.891313  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:19.891956  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:20.325936  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:20.359620  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:20.396172  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:20.397569  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:20.825801  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:20.858605  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:20.894187  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:20.894393  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:21.326553  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:21.358770  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:21.391670  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:21.392482  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:21.786082  861156 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s4mtb" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:21.826006  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:21.860822  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:21.896314  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:21.897526  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:22.326494  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:22.359134  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:22.392906  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:22.393321  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:22.829777  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:22.931022  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:22.931621  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:22.932403  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:23.326488  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:23.426928  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:23.427334  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:23.428650  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:23.826117  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:23.858454  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:23.927742  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:23.927793  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:24.278082  861156 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s4mtb" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:24.327305  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:24.358499  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:24.396335  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:24.396498  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:24.826391  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:24.859607  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:24.894376  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:24.894565  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:25.327986  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:25.359779  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:25.394061  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:25.394783  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:25.826875  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:25.859225  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:25.900703  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:25.900949  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:26.279412  861156 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s4mtb" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:26.326212  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:26.360193  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:26.394174  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:26.394522  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:26.829448  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:26.860592  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:26.892661  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:26.892671  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:27.326413  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:27.358774  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:27.393948  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:27.394483  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:27.826730  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:27.861774  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:27.893325  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:27.894374  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:28.326758  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:28.358704  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:28.392769  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:28.393135  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:28.783206  861156 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s4mtb" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:28.882725  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:28.987729  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:28.987940  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:28.988513  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:29.327341  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:29.358598  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:29.391952  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:29.392207  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:29.826015  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:29.859291  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:29.891370  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:29.891370  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:30.325858  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:30.359001  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:30.391501  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:30.391896  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:30.826056  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:30.859130  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:30.892026  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:30.892755  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:31.277820  861156 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s4mtb" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:31.332003  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:31.359202  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:31.393161  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:31.393619  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:31.825980  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:31.859745  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:31.893399  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:31.893847  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:32.327082  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:32.360143  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:32.392787  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:32.393280  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:32.825016  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:32.859382  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:32.892805  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:32.893044  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:33.279540  861156 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s4mtb" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:33.326204  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:33.360026  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:33.393539  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:33.393766  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:33.825969  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:33.859339  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:33.892978  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:33.893014  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:34.325956  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:34.360686  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:34.392699  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:34.394054  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:34.826160  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:34.863913  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:34.893977  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:34.894495  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:35.280712  861156 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s4mtb" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:35.326764  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:35.359290  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:35.393663  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:35.394113  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:35.825151  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:35.858188  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:35.892475  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:35.893329  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:36.325786  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:36.360698  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:36.393118  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:36.393511  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:36.828654  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:36.863057  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:36.894026  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:36.894553  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:37.335956  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:37.363906  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:37.391905  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:37.392263  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:37.779615  861156 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s4mtb" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:37.825235  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:37.860004  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:37.895170  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:37.895869  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:38.325775  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:38.361989  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:38.400010  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:38.401571  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:38.825605  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:38.859039  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:38.893336  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:38.893522  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:39.326432  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:39.362005  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:39.393355  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:39.395580  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:39.784174  861156 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s4mtb" in "kube-system" namespace has status "Ready":"False"
	I0217 12:35:39.839019  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:39.859950  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:39.896325  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:39.898826  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:40.326062  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:40.360087  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:40.392635  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:40.392770  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:40.825736  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:40.858725  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:40.892522  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:40.893189  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:41.326813  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:41.359421  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:41.391616  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:41.392087  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:41.825639  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:41.858833  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:41.891688  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:41.891972  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:42.278328  861156 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-s4mtb" in "kube-system" namespace has status "Ready":"True"
	I0217 12:35:42.278352  861156 pod_ready.go:82] duration metric: took 22.505573901s for pod "nvidia-device-plugin-daemonset-s4mtb" in "kube-system" namespace to be "Ready" ...
	I0217 12:35:42.278394  861156 pod_ready.go:39] duration metric: took 38.436729172s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0217 12:35:42.278416  861156 api_server.go:52] waiting for apiserver process to appear ...
	I0217 12:35:42.278482  861156 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0217 12:35:42.278626  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0217 12:35:42.326978  861156 cri.go:89] found id: "4a600018ee2312d1adab0b3b0442ef473c7f2ba5cb64a2e131b33f3e96ef40eb"
	I0217 12:35:42.327001  861156 cri.go:89] found id: ""
	I0217 12:35:42.327010  861156 logs.go:282] 1 containers: [4a600018ee2312d1adab0b3b0442ef473c7f2ba5cb64a2e131b33f3e96ef40eb]
	I0217 12:35:42.327155  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:35:42.328681  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:42.331621  861156 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0217 12:35:42.331706  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0217 12:35:42.359329  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:42.398686  861156 cri.go:89] found id: "103af99aee15b2696a4c9a8d591681e65fa0d34be1a8612c25a6a5ad6aee05f3"
	I0217 12:35:42.398713  861156 cri.go:89] found id: ""
	I0217 12:35:42.398721  861156 logs.go:282] 1 containers: [103af99aee15b2696a4c9a8d591681e65fa0d34be1a8612c25a6a5ad6aee05f3]
	I0217 12:35:42.398778  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:35:42.402356  861156 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0217 12:35:42.402425  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0217 12:35:42.429351  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:42.429493  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:42.449903  861156 cri.go:89] found id: "0796e96fc7e17bb77901def05f41d591911105950dad08587349aa3990b43a99"
	I0217 12:35:42.449936  861156 cri.go:89] found id: ""
	I0217 12:35:42.449945  861156 logs.go:282] 1 containers: [0796e96fc7e17bb77901def05f41d591911105950dad08587349aa3990b43a99]
	I0217 12:35:42.450021  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:35:42.453606  861156 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0217 12:35:42.453728  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0217 12:35:42.492443  861156 cri.go:89] found id: "b300eec80adb62926e58cd378e939c31c00a1b7ca1038143259d2b184d8a952d"
	I0217 12:35:42.492468  861156 cri.go:89] found id: ""
	I0217 12:35:42.492477  861156 logs.go:282] 1 containers: [b300eec80adb62926e58cd378e939c31c00a1b7ca1038143259d2b184d8a952d]
	I0217 12:35:42.492538  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:35:42.496017  861156 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0217 12:35:42.496134  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0217 12:35:42.538973  861156 cri.go:89] found id: "0a0166bca5f7bc2b91a5f76149c0bf76384b2b41f918181de25eaa8ae626c557"
	I0217 12:35:42.538996  861156 cri.go:89] found id: ""
	I0217 12:35:42.539005  861156 logs.go:282] 1 containers: [0a0166bca5f7bc2b91a5f76149c0bf76384b2b41f918181de25eaa8ae626c557]
	I0217 12:35:42.539091  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:35:42.543638  861156 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0217 12:35:42.543729  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0217 12:35:42.587257  861156 cri.go:89] found id: "917718be6eb87a601d793709189a79e984095d43b60826d6ab6829ed637697ea"
	I0217 12:35:42.587280  861156 cri.go:89] found id: ""
	I0217 12:35:42.587288  861156 logs.go:282] 1 containers: [917718be6eb87a601d793709189a79e984095d43b60826d6ab6829ed637697ea]
	I0217 12:35:42.587371  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:35:42.591284  861156 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0217 12:35:42.591380  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0217 12:35:42.650226  861156 cri.go:89] found id: "66f71c28e4955af0e9f4cdb067799c6427a1b80cc38cd54fbee6f35aaf1f7600"
	I0217 12:35:42.650255  861156 cri.go:89] found id: ""
	I0217 12:35:42.650267  861156 logs.go:282] 1 containers: [66f71c28e4955af0e9f4cdb067799c6427a1b80cc38cd54fbee6f35aaf1f7600]
	I0217 12:35:42.650401  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:35:42.660905  861156 logs.go:123] Gathering logs for kubelet ...
	I0217 12:35:42.660932  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0217 12:35:42.749980  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: W0217 12:34:18.617292    1499 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-925274' and this object
	W0217 12:35:42.750366  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: E0217 12:34:18.617349    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:35:42.752183  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: I0217 12:34:18.695991    1499 status_manager.go:890] "Failed to get status for pod" podUID="ce4fa4a1-1905-49a7-80c9-488708fea328" pod="kube-system/kube-proxy-9mkwr" err="pods \"kube-proxy-9mkwr\" is forbidden: User \"system:node:addons-925274\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object"
	W0217 12:35:42.754057  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: W0217 12:34:18.696089    1499 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-925274' and this object
	W0217 12:35:42.754305  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: E0217 12:34:18.696117    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:35:42.781830  861156 logs.go:138] Found kubelet problem: Feb 17 12:35:03 addons-925274 kubelet[1499]: W0217 12:35:03.736039    1499 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-925274' and this object
	W0217 12:35:42.782153  861156 logs.go:138] Found kubelet problem: Feb 17 12:35:03 addons-925274 kubelet[1499]: E0217 12:35:03.736089    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:35:42.782355  861156 logs.go:138] Found kubelet problem: Feb 17 12:35:03 addons-925274 kubelet[1499]: W0217 12:35:03.736151    1499 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-925274' and this object
	W0217 12:35:42.782648  861156 logs.go:138] Found kubelet problem: Feb 17 12:35:03 addons-925274 kubelet[1499]: E0217 12:35:03.736164    1499 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	I0217 12:35:42.818048  861156 logs.go:123] Gathering logs for dmesg ...
	I0217 12:35:42.818139  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0217 12:35:42.826477  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:42.837132  861156 logs.go:123] Gathering logs for etcd [103af99aee15b2696a4c9a8d591681e65fa0d34be1a8612c25a6a5ad6aee05f3] ...
	I0217 12:35:42.837165  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 103af99aee15b2696a4c9a8d591681e65fa0d34be1a8612c25a6a5ad6aee05f3"
	I0217 12:35:42.864596  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:42.895869  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:42.896845  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:42.924347  861156 logs.go:123] Gathering logs for kube-proxy [0a0166bca5f7bc2b91a5f76149c0bf76384b2b41f918181de25eaa8ae626c557] ...
	I0217 12:35:42.924394  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a0166bca5f7bc2b91a5f76149c0bf76384b2b41f918181de25eaa8ae626c557"
	I0217 12:35:42.997632  861156 logs.go:123] Gathering logs for kube-controller-manager [917718be6eb87a601d793709189a79e984095d43b60826d6ab6829ed637697ea] ...
	I0217 12:35:42.997661  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 917718be6eb87a601d793709189a79e984095d43b60826d6ab6829ed637697ea"
	I0217 12:35:43.083744  861156 logs.go:123] Gathering logs for container status ...
	I0217 12:35:43.083783  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0217 12:35:43.180538  861156 logs.go:123] Gathering logs for describe nodes ...
	I0217 12:35:43.180570  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0217 12:35:43.327492  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:43.360606  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:43.406029  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:43.406207  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:43.437406  861156 logs.go:123] Gathering logs for kube-apiserver [4a600018ee2312d1adab0b3b0442ef473c7f2ba5cb64a2e131b33f3e96ef40eb] ...
	I0217 12:35:43.437441  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a600018ee2312d1adab0b3b0442ef473c7f2ba5cb64a2e131b33f3e96ef40eb"
	I0217 12:35:43.519936  861156 logs.go:123] Gathering logs for coredns [0796e96fc7e17bb77901def05f41d591911105950dad08587349aa3990b43a99] ...
	I0217 12:35:43.519976  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0796e96fc7e17bb77901def05f41d591911105950dad08587349aa3990b43a99"
	I0217 12:35:43.594778  861156 logs.go:123] Gathering logs for kube-scheduler [b300eec80adb62926e58cd378e939c31c00a1b7ca1038143259d2b184d8a952d] ...
	I0217 12:35:43.594808  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b300eec80adb62926e58cd378e939c31c00a1b7ca1038143259d2b184d8a952d"
	I0217 12:35:43.655775  861156 logs.go:123] Gathering logs for kindnet [66f71c28e4955af0e9f4cdb067799c6427a1b80cc38cd54fbee6f35aaf1f7600] ...
	I0217 12:35:43.655808  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f71c28e4955af0e9f4cdb067799c6427a1b80cc38cd54fbee6f35aaf1f7600"
	I0217 12:35:43.713451  861156 logs.go:123] Gathering logs for CRI-O ...
	I0217 12:35:43.713486  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0217 12:35:43.829766  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:43.831969  861156 out.go:358] Setting ErrFile to fd 2...
	I0217 12:35:43.832014  861156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0217 12:35:43.832097  861156 out.go:270] X Problems detected in kubelet:
	W0217 12:35:43.832142  861156 out.go:270]   Feb 17 12:34:18 addons-925274 kubelet[1499]: E0217 12:34:18.696117    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:35:43.832312  861156 out.go:270]   Feb 17 12:35:03 addons-925274 kubelet[1499]: W0217 12:35:03.736039    1499 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-925274' and this object
	W0217 12:35:43.832346  861156 out.go:270]   Feb 17 12:35:03 addons-925274 kubelet[1499]: E0217 12:35:03.736089    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:35:43.832395  861156 out.go:270]   Feb 17 12:35:03 addons-925274 kubelet[1499]: W0217 12:35:03.736151    1499 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-925274' and this object
	W0217 12:35:43.832429  861156 out.go:270]   Feb 17 12:35:03 addons-925274 kubelet[1499]: E0217 12:35:03.736164    1499 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	I0217 12:35:43.832472  861156 out.go:358] Setting ErrFile to fd 2...
	I0217 12:35:43.832502  861156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:35:43.861211  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:43.931307  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:43.931800  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:44.326054  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:44.359509  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:44.427192  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:44.427425  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:44.825156  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:44.858118  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:44.892086  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:44.893164  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:45.325441  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:45.358971  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:45.391631  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:45.391783  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:45.826053  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:45.859267  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:45.891949  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:45.892035  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:46.325694  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:46.358921  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:46.392301  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:46.393105  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:46.825819  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:46.864330  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:46.893222  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:46.894006  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:47.325530  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:47.359487  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:47.391888  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:47.391926  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:47.827890  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:47.927144  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0217 12:35:47.927261  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:47.927849  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:48.325229  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:48.358480  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:48.391509  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:48.392093  861156 kapi.go:107] duration metric: took 1m24.504037429s to wait for kubernetes.io/minikube-addons=registry ...
	I0217 12:35:48.824758  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:48.859142  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:48.890776  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:49.326813  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:49.360025  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:49.391759  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:49.825802  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:49.858927  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:49.891974  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:50.325378  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:50.358105  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:50.391800  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:50.825540  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:50.858622  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:50.891138  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:51.325623  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:51.358814  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:51.392577  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:51.825900  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:51.859670  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:51.891740  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:52.325908  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:52.359114  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:52.391816  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:52.825175  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:52.858252  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:52.890718  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:53.325629  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:53.359326  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:53.391536  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:53.824942  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:53.832961  861156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0217 12:35:53.847526  861156 api_server.go:72] duration metric: took 1m37.35242233s to wait for apiserver process to appear ...
	I0217 12:35:53.847552  861156 api_server.go:88] waiting for apiserver healthz status ...
	I0217 12:35:53.847608  861156 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0217 12:35:53.847679  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0217 12:35:53.859755  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:53.886702  861156 cri.go:89] found id: "4a600018ee2312d1adab0b3b0442ef473c7f2ba5cb64a2e131b33f3e96ef40eb"
	I0217 12:35:53.886726  861156 cri.go:89] found id: ""
	I0217 12:35:53.886735  861156 logs.go:282] 1 containers: [4a600018ee2312d1adab0b3b0442ef473c7f2ba5cb64a2e131b33f3e96ef40eb]
	I0217 12:35:53.886803  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:35:53.891790  861156 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0217 12:35:53.891896  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0217 12:35:53.892573  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:53.934054  861156 cri.go:89] found id: "103af99aee15b2696a4c9a8d591681e65fa0d34be1a8612c25a6a5ad6aee05f3"
	I0217 12:35:53.934080  861156 cri.go:89] found id: ""
	I0217 12:35:53.934088  861156 logs.go:282] 1 containers: [103af99aee15b2696a4c9a8d591681e65fa0d34be1a8612c25a6a5ad6aee05f3]
	I0217 12:35:53.934179  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:35:53.937991  861156 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0217 12:35:53.938086  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0217 12:35:53.980284  861156 cri.go:89] found id: "0796e96fc7e17bb77901def05f41d591911105950dad08587349aa3990b43a99"
	I0217 12:35:53.980307  861156 cri.go:89] found id: ""
	I0217 12:35:53.980315  861156 logs.go:282] 1 containers: [0796e96fc7e17bb77901def05f41d591911105950dad08587349aa3990b43a99]
	I0217 12:35:53.980371  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:35:53.983870  861156 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0217 12:35:53.983944  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0217 12:35:54.027308  861156 cri.go:89] found id: "b300eec80adb62926e58cd378e939c31c00a1b7ca1038143259d2b184d8a952d"
	I0217 12:35:54.027333  861156 cri.go:89] found id: ""
	I0217 12:35:54.027341  861156 logs.go:282] 1 containers: [b300eec80adb62926e58cd378e939c31c00a1b7ca1038143259d2b184d8a952d]
	I0217 12:35:54.027436  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:35:54.033203  861156 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0217 12:35:54.033310  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0217 12:35:54.083322  861156 cri.go:89] found id: "0a0166bca5f7bc2b91a5f76149c0bf76384b2b41f918181de25eaa8ae626c557"
	I0217 12:35:54.083346  861156 cri.go:89] found id: ""
	I0217 12:35:54.083354  861156 logs.go:282] 1 containers: [0a0166bca5f7bc2b91a5f76149c0bf76384b2b41f918181de25eaa8ae626c557]
	I0217 12:35:54.083410  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:35:54.087664  861156 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0217 12:35:54.087739  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0217 12:35:54.136990  861156 cri.go:89] found id: "917718be6eb87a601d793709189a79e984095d43b60826d6ab6829ed637697ea"
	I0217 12:35:54.137014  861156 cri.go:89] found id: ""
	I0217 12:35:54.137022  861156 logs.go:282] 1 containers: [917718be6eb87a601d793709189a79e984095d43b60826d6ab6829ed637697ea]
	I0217 12:35:54.137103  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:35:54.140656  861156 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0217 12:35:54.140730  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0217 12:35:54.181502  861156 cri.go:89] found id: "66f71c28e4955af0e9f4cdb067799c6427a1b80cc38cd54fbee6f35aaf1f7600"
	I0217 12:35:54.181524  861156 cri.go:89] found id: ""
	I0217 12:35:54.181532  861156 logs.go:282] 1 containers: [66f71c28e4955af0e9f4cdb067799c6427a1b80cc38cd54fbee6f35aaf1f7600]
	I0217 12:35:54.181618  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:35:54.185530  861156 logs.go:123] Gathering logs for dmesg ...
	I0217 12:35:54.185560  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0217 12:35:54.201660  861156 logs.go:123] Gathering logs for describe nodes ...
	I0217 12:35:54.201694  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0217 12:35:54.328116  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:54.341824  861156 logs.go:123] Gathering logs for etcd [103af99aee15b2696a4c9a8d591681e65fa0d34be1a8612c25a6a5ad6aee05f3] ...
	I0217 12:35:54.341855  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 103af99aee15b2696a4c9a8d591681e65fa0d34be1a8612c25a6a5ad6aee05f3"
	I0217 12:35:54.360888  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:54.390909  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:54.411569  861156 logs.go:123] Gathering logs for coredns [0796e96fc7e17bb77901def05f41d591911105950dad08587349aa3990b43a99] ...
	I0217 12:35:54.411607  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0796e96fc7e17bb77901def05f41d591911105950dad08587349aa3990b43a99"
	I0217 12:35:54.451009  861156 logs.go:123] Gathering logs for kube-scheduler [b300eec80adb62926e58cd378e939c31c00a1b7ca1038143259d2b184d8a952d] ...
	I0217 12:35:54.451036  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b300eec80adb62926e58cd378e939c31c00a1b7ca1038143259d2b184d8a952d"
	I0217 12:35:54.508742  861156 logs.go:123] Gathering logs for kube-controller-manager [917718be6eb87a601d793709189a79e984095d43b60826d6ab6829ed637697ea] ...
	I0217 12:35:54.508780  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 917718be6eb87a601d793709189a79e984095d43b60826d6ab6829ed637697ea"
	I0217 12:35:54.587855  861156 logs.go:123] Gathering logs for kindnet [66f71c28e4955af0e9f4cdb067799c6427a1b80cc38cd54fbee6f35aaf1f7600] ...
	I0217 12:35:54.587889  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f71c28e4955af0e9f4cdb067799c6427a1b80cc38cd54fbee6f35aaf1f7600"
	I0217 12:35:54.628993  861156 logs.go:123] Gathering logs for kubelet ...
	I0217 12:35:54.629075  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0217 12:35:54.690986  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: W0217 12:34:18.617292    1499 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-925274' and this object
	W0217 12:35:54.691314  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: E0217 12:34:18.617349    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:35:54.692653  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: I0217 12:34:18.695991    1499 status_manager.go:890] "Failed to get status for pod" podUID="ce4fa4a1-1905-49a7-80c9-488708fea328" pod="kube-system/kube-proxy-9mkwr" err="pods \"kube-proxy-9mkwr\" is forbidden: User \"system:node:addons-925274\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object"
	W0217 12:35:54.692876  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: W0217 12:34:18.696089    1499 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-925274' and this object
	W0217 12:35:54.693120  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: E0217 12:34:18.696117    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:35:54.718356  861156 logs.go:138] Found kubelet problem: Feb 17 12:35:03 addons-925274 kubelet[1499]: W0217 12:35:03.736039    1499 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-925274' and this object
	W0217 12:35:54.718657  861156 logs.go:138] Found kubelet problem: Feb 17 12:35:03 addons-925274 kubelet[1499]: E0217 12:35:03.736089    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:35:54.718869  861156 logs.go:138] Found kubelet problem: Feb 17 12:35:03 addons-925274 kubelet[1499]: W0217 12:35:03.736151    1499 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-925274' and this object
	W0217 12:35:54.719206  861156 logs.go:138] Found kubelet problem: Feb 17 12:35:03 addons-925274 kubelet[1499]: E0217 12:35:03.736164    1499 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	I0217 12:35:54.756891  861156 logs.go:123] Gathering logs for kube-apiserver [4a600018ee2312d1adab0b3b0442ef473c7f2ba5cb64a2e131b33f3e96ef40eb] ...
	I0217 12:35:54.756978  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a600018ee2312d1adab0b3b0442ef473c7f2ba5cb64a2e131b33f3e96ef40eb"
	I0217 12:35:54.826029  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:54.844913  861156 logs.go:123] Gathering logs for kube-proxy [0a0166bca5f7bc2b91a5f76149c0bf76384b2b41f918181de25eaa8ae626c557] ...
	I0217 12:35:54.844947  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a0166bca5f7bc2b91a5f76149c0bf76384b2b41f918181de25eaa8ae626c557"
	I0217 12:35:54.862154  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:54.891891  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:54.950867  861156 logs.go:123] Gathering logs for CRI-O ...
	I0217 12:35:54.950901  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0217 12:35:55.055782  861156 logs.go:123] Gathering logs for container status ...
	I0217 12:35:55.055844  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0217 12:35:55.182384  861156 out.go:358] Setting ErrFile to fd 2...
	I0217 12:35:55.182593  861156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0217 12:35:55.182694  861156 out.go:270] X Problems detected in kubelet:
	W0217 12:35:55.182742  861156 out.go:270]   Feb 17 12:34:18 addons-925274 kubelet[1499]: E0217 12:34:18.696117    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:35:55.182910  861156 out.go:270]   Feb 17 12:35:03 addons-925274 kubelet[1499]: W0217 12:35:03.736039    1499 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-925274' and this object
	W0217 12:35:55.182956  861156 out.go:270]   Feb 17 12:35:03 addons-925274 kubelet[1499]: E0217 12:35:03.736089    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:35:55.182994  861156 out.go:270]   Feb 17 12:35:03 addons-925274 kubelet[1499]: W0217 12:35:03.736151    1499 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-925274' and this object
	W0217 12:35:55.183048  861156 out.go:270]   Feb 17 12:35:03 addons-925274 kubelet[1499]: E0217 12:35:03.736164    1499 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	I0217 12:35:55.183085  861156 out.go:358] Setting ErrFile to fd 2...
	I0217 12:35:55.183108  861156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:35:55.327537  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:55.359074  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:55.392459  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:55.827526  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:55.859293  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:55.894758  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:56.325304  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:56.359466  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:56.392373  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:56.826485  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:56.859616  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:56.892255  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:57.327990  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:57.360880  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:57.393052  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:57.826139  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:57.858118  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:57.891059  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:58.327002  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:58.359785  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:58.391389  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:58.826258  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:58.859526  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:58.893529  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:59.327384  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:59.360510  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:59.391977  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:35:59.825146  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:35:59.858817  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:35:59.891879  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:00.338505  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:00.360322  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:00.393416  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:00.826465  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:00.859565  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:00.892265  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:01.329584  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:01.359815  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:01.392220  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:01.826178  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:01.860746  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:01.893261  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:02.334919  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:02.359585  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:02.391925  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:02.831523  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:02.864585  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:02.898615  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:03.326537  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:03.359042  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:03.391685  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:03.825787  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:03.861040  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:03.891779  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:04.326660  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:04.359057  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:04.391452  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:04.829621  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:04.861947  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:04.893316  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:05.183491  861156 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0217 12:36:05.200961  861156 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0217 12:36:05.204149  861156 api_server.go:141] control plane version: v1.32.1
	I0217 12:36:05.204179  861156 api_server.go:131] duration metric: took 11.356619306s to wait for apiserver health ...
	I0217 12:36:05.204200  861156 system_pods.go:43] waiting for kube-system pods to appear ...
	I0217 12:36:05.204224  861156 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0217 12:36:05.204293  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0217 12:36:05.284203  861156 cri.go:89] found id: "4a600018ee2312d1adab0b3b0442ef473c7f2ba5cb64a2e131b33f3e96ef40eb"
	I0217 12:36:05.284230  861156 cri.go:89] found id: ""
	I0217 12:36:05.284239  861156 logs.go:282] 1 containers: [4a600018ee2312d1adab0b3b0442ef473c7f2ba5cb64a2e131b33f3e96ef40eb]
	I0217 12:36:05.284298  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:36:05.299599  861156 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0217 12:36:05.299684  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0217 12:36:05.326378  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:05.358327  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:05.377560  861156 cri.go:89] found id: "103af99aee15b2696a4c9a8d591681e65fa0d34be1a8612c25a6a5ad6aee05f3"
	I0217 12:36:05.377585  861156 cri.go:89] found id: ""
	I0217 12:36:05.377593  861156 logs.go:282] 1 containers: [103af99aee15b2696a4c9a8d591681e65fa0d34be1a8612c25a6a5ad6aee05f3]
	I0217 12:36:05.377657  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:36:05.382123  861156 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0217 12:36:05.382198  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0217 12:36:05.391194  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:05.446720  861156 cri.go:89] found id: "0796e96fc7e17bb77901def05f41d591911105950dad08587349aa3990b43a99"
	I0217 12:36:05.446741  861156 cri.go:89] found id: ""
	I0217 12:36:05.446749  861156 logs.go:282] 1 containers: [0796e96fc7e17bb77901def05f41d591911105950dad08587349aa3990b43a99]
	I0217 12:36:05.446804  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:36:05.451434  861156 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0217 12:36:05.451506  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0217 12:36:05.510912  861156 cri.go:89] found id: "b300eec80adb62926e58cd378e939c31c00a1b7ca1038143259d2b184d8a952d"
	I0217 12:36:05.510935  861156 cri.go:89] found id: ""
	I0217 12:36:05.510942  861156 logs.go:282] 1 containers: [b300eec80adb62926e58cd378e939c31c00a1b7ca1038143259d2b184d8a952d]
	I0217 12:36:05.511001  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:36:05.515259  861156 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0217 12:36:05.515332  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0217 12:36:05.564762  861156 cri.go:89] found id: "0a0166bca5f7bc2b91a5f76149c0bf76384b2b41f918181de25eaa8ae626c557"
	I0217 12:36:05.564786  861156 cri.go:89] found id: ""
	I0217 12:36:05.564794  861156 logs.go:282] 1 containers: [0a0166bca5f7bc2b91a5f76149c0bf76384b2b41f918181de25eaa8ae626c557]
	I0217 12:36:05.564850  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:36:05.569684  861156 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0217 12:36:05.569753  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0217 12:36:05.632382  861156 cri.go:89] found id: "917718be6eb87a601d793709189a79e984095d43b60826d6ab6829ed637697ea"
	I0217 12:36:05.632407  861156 cri.go:89] found id: ""
	I0217 12:36:05.632416  861156 logs.go:282] 1 containers: [917718be6eb87a601d793709189a79e984095d43b60826d6ab6829ed637697ea]
	I0217 12:36:05.632469  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:36:05.638306  861156 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0217 12:36:05.638378  861156 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0217 12:36:05.695730  861156 cri.go:89] found id: "66f71c28e4955af0e9f4cdb067799c6427a1b80cc38cd54fbee6f35aaf1f7600"
	I0217 12:36:05.695754  861156 cri.go:89] found id: ""
	I0217 12:36:05.695763  861156 logs.go:282] 1 containers: [66f71c28e4955af0e9f4cdb067799c6427a1b80cc38cd54fbee6f35aaf1f7600]
	I0217 12:36:05.695817  861156 ssh_runner.go:195] Run: which crictl
	I0217 12:36:05.701685  861156 logs.go:123] Gathering logs for dmesg ...
	I0217 12:36:05.701709  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0217 12:36:05.725394  861156 logs.go:123] Gathering logs for describe nodes ...
	I0217 12:36:05.725425  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0217 12:36:05.840663  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:05.859283  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:05.885482  861156 logs.go:123] Gathering logs for kube-apiserver [4a600018ee2312d1adab0b3b0442ef473c7f2ba5cb64a2e131b33f3e96ef40eb] ...
	I0217 12:36:05.885523  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a600018ee2312d1adab0b3b0442ef473c7f2ba5cb64a2e131b33f3e96ef40eb"
	I0217 12:36:05.893084  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:05.964575  861156 logs.go:123] Gathering logs for etcd [103af99aee15b2696a4c9a8d591681e65fa0d34be1a8612c25a6a5ad6aee05f3] ...
	I0217 12:36:05.964656  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 103af99aee15b2696a4c9a8d591681e65fa0d34be1a8612c25a6a5ad6aee05f3"
	I0217 12:36:06.051210  861156 logs.go:123] Gathering logs for coredns [0796e96fc7e17bb77901def05f41d591911105950dad08587349aa3990b43a99] ...
	I0217 12:36:06.052377  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0796e96fc7e17bb77901def05f41d591911105950dad08587349aa3990b43a99"
	I0217 12:36:06.160093  861156 logs.go:123] Gathering logs for kindnet [66f71c28e4955af0e9f4cdb067799c6427a1b80cc38cd54fbee6f35aaf1f7600] ...
	I0217 12:36:06.160190  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f71c28e4955af0e9f4cdb067799c6427a1b80cc38cd54fbee6f35aaf1f7600"
	I0217 12:36:06.213969  861156 logs.go:123] Gathering logs for kubelet ...
	I0217 12:36:06.214234  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0217 12:36:06.277361  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: W0217 12:34:18.617292    1499 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-925274' and this object
	W0217 12:36:06.277657  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: E0217 12:34:18.617349    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:36:06.278898  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: I0217 12:34:18.695991    1499 status_manager.go:890] "Failed to get status for pod" podUID="ce4fa4a1-1905-49a7-80c9-488708fea328" pod="kube-system/kube-proxy-9mkwr" err="pods \"kube-proxy-9mkwr\" is forbidden: User \"system:node:addons-925274\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object"
	W0217 12:36:06.279100  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: W0217 12:34:18.696089    1499 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-925274' and this object
	W0217 12:36:06.279345  861156 logs.go:138] Found kubelet problem: Feb 17 12:34:18 addons-925274 kubelet[1499]: E0217 12:34:18.696117    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:36:06.303396  861156 logs.go:138] Found kubelet problem: Feb 17 12:35:03 addons-925274 kubelet[1499]: W0217 12:35:03.736039    1499 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-925274' and this object
	W0217 12:36:06.303800  861156 logs.go:138] Found kubelet problem: Feb 17 12:35:03 addons-925274 kubelet[1499]: E0217 12:35:03.736089    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:36:06.304038  861156 logs.go:138] Found kubelet problem: Feb 17 12:35:03 addons-925274 kubelet[1499]: W0217 12:35:03.736151    1499 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-925274' and this object
	W0217 12:36:06.304285  861156 logs.go:138] Found kubelet problem: Feb 17 12:35:03 addons-925274 kubelet[1499]: E0217 12:35:03.736164    1499 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	I0217 12:36:06.343588  861156 logs.go:123] Gathering logs for kube-scheduler [b300eec80adb62926e58cd378e939c31c00a1b7ca1038143259d2b184d8a952d] ...
	I0217 12:36:06.344091  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b300eec80adb62926e58cd378e939c31c00a1b7ca1038143259d2b184d8a952d"
	I0217 12:36:06.344049  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:06.359101  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:06.399981  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:06.430515  861156 logs.go:123] Gathering logs for kube-proxy [0a0166bca5f7bc2b91a5f76149c0bf76384b2b41f918181de25eaa8ae626c557] ...
	I0217 12:36:06.430598  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a0166bca5f7bc2b91a5f76149c0bf76384b2b41f918181de25eaa8ae626c557"
	I0217 12:36:06.511059  861156 logs.go:123] Gathering logs for kube-controller-manager [917718be6eb87a601d793709189a79e984095d43b60826d6ab6829ed637697ea] ...
	I0217 12:36:06.511087  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 917718be6eb87a601d793709189a79e984095d43b60826d6ab6829ed637697ea"
	I0217 12:36:06.611871  861156 logs.go:123] Gathering logs for CRI-O ...
	I0217 12:36:06.611986  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0217 12:36:06.715924  861156 logs.go:123] Gathering logs for container status ...
	I0217 12:36:06.715955  861156 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0217 12:36:06.825232  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:06.857553  861156 out.go:358] Setting ErrFile to fd 2...
	I0217 12:36:06.857615  861156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0217 12:36:06.857702  861156 out.go:270] X Problems detected in kubelet:
	W0217 12:36:06.857741  861156 out.go:270]   Feb 17 12:34:18 addons-925274 kubelet[1499]: E0217 12:34:18.696117    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:36:06.857896  861156 out.go:270]   Feb 17 12:35:03 addons-925274 kubelet[1499]: W0217 12:35:03.736039    1499 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-925274' and this object
	W0217 12:36:06.857932  861156 out.go:270]   Feb 17 12:35:03 addons-925274 kubelet[1499]: E0217 12:35:03.736089    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	W0217 12:36:06.857984  861156 out.go:270]   Feb 17 12:35:03 addons-925274 kubelet[1499]: W0217 12:35:03.736151    1499 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-925274" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-925274' and this object
	W0217 12:36:06.858020  861156 out.go:270]   Feb 17 12:35:03 addons-925274 kubelet[1499]: E0217 12:35:03.736164    1499 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-925274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-925274' and this object" logger="UnhandledError"
	I0217 12:36:06.858062  861156 out.go:358] Setting ErrFile to fd 2...
	I0217 12:36:06.858090  861156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:36:06.863390  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:06.891635  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:07.327975  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:07.364610  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:07.391858  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:07.825424  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:07.858660  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:07.892308  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:08.326169  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:08.358011  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:08.391140  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:08.825507  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:08.926156  861156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0217 12:36:08.926170  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:09.326098  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:09.359014  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:09.392202  861156 kapi.go:107] duration metric: took 1m45.504279849s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0217 12:36:09.825025  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:09.859065  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:10.326240  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:10.358361  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:10.825036  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:10.859310  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:11.327515  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:11.366856  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:11.826118  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:11.860124  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:12.326335  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:12.359404  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:12.826068  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:12.859201  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:13.335282  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:13.359122  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:13.826199  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0217 12:36:13.861364  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:14.326038  861156 kapi.go:107] duration metric: took 1m46.503870924s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0217 12:36:14.329055  861156 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-925274 cluster.
	I0217 12:36:14.331910  861156 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0217 12:36:14.334704  861156 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0217 12:36:14.358816  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:14.860450  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:15.360044  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:15.859287  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:16.358480  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:16.864298  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:16.865297  861156 system_pods.go:59] 18 kube-system pods found
	I0217 12:36:16.865379  861156 system_pods.go:61] "coredns-668d6bf9bc-rvcxj" [da7ca2f4-df86-428f-abf5-d29a4ace81ae] Running
	I0217 12:36:16.865402  861156 system_pods.go:61] "csi-hostpath-attacher-0" [440a9d83-5fc6-44cd-8083-48b26a7ce201] Running
	I0217 12:36:16.865491  861156 system_pods.go:61] "csi-hostpath-resizer-0" [7cca9efd-6265-46d6-a61a-cd6101ff2fb1] Running
	I0217 12:36:16.865522  861156 system_pods.go:61] "csi-hostpathplugin-hqhk6" [cfd14aca-d1a5-43e0-864b-fa771487f044] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0217 12:36:16.865693  861156 system_pods.go:61] "etcd-addons-925274" [92d83d2f-4d04-4b7b-9e9c-4204b9554e4d] Running
	I0217 12:36:16.865728  861156 system_pods.go:61] "kindnet-ngx9r" [31c23aa9-156a-4f89-92e9-71773bdb8d00] Running
	I0217 12:36:16.865748  861156 system_pods.go:61] "kube-apiserver-addons-925274" [77c16c05-48d2-497f-8c54-85bd14c3ff6a] Running
	I0217 12:36:16.865770  861156 system_pods.go:61] "kube-controller-manager-addons-925274" [a689e992-7cb4-40b7-886b-caf5fb64969c] Running
	I0217 12:36:16.865807  861156 system_pods.go:61] "kube-ingress-dns-minikube" [7bd144d1-49f7-4ad3-b78b-5585173c3466] Running
	I0217 12:36:16.865834  861156 system_pods.go:61] "kube-proxy-9mkwr" [ce4fa4a1-1905-49a7-80c9-488708fea328] Running
	I0217 12:36:16.865855  861156 system_pods.go:61] "kube-scheduler-addons-925274" [95c168f5-fa0a-4804-a68e-4f171b9e6ec6] Running
	I0217 12:36:16.865877  861156 system_pods.go:61] "metrics-server-7fbb699795-m6442" [1d84bb16-0b65-4275-8143-4a1f4c61003f] Running
	I0217 12:36:16.865909  861156 system_pods.go:61] "nvidia-device-plugin-daemonset-s4mtb" [b1b07897-040f-443d-8dcc-0aea2b69a387] Running
	I0217 12:36:16.865931  861156 system_pods.go:61] "registry-6c88467877-gzrk4" [bc98d5d6-113a-4202-814b-28cac3908f75] Running
	I0217 12:36:16.865950  861156 system_pods.go:61] "registry-proxy-f8jhn" [db9d1473-4392-40d5-9a55-91cf8512c525] Running
	I0217 12:36:16.865972  861156 system_pods.go:61] "snapshot-controller-68b874b76f-9jz8c" [4044c429-a3c1-4d22-b302-cd8c399b1e97] Running
	I0217 12:36:16.865992  861156 system_pods.go:61] "snapshot-controller-68b874b76f-dnzt2" [4c63c9b8-3447-491f-9b59-74d926a5809c] Running
	I0217 12:36:16.866026  861156 system_pods.go:61] "storage-provisioner" [76ea267c-04e0-41c7-9d68-6fda0413c618] Running
	I0217 12:36:16.866054  861156 system_pods.go:74] duration metric: took 11.661847008s to wait for pod list to return data ...
	I0217 12:36:16.866075  861156 default_sa.go:34] waiting for default service account to be created ...
	I0217 12:36:16.873165  861156 default_sa.go:45] found service account: "default"
	I0217 12:36:16.873189  861156 default_sa.go:55] duration metric: took 7.092284ms for default service account to be created ...
	I0217 12:36:16.873205  861156 system_pods.go:116] waiting for k8s-apps to be running ...
	I0217 12:36:16.881081  861156 system_pods.go:86] 18 kube-system pods found
	I0217 12:36:16.881160  861156 system_pods.go:89] "coredns-668d6bf9bc-rvcxj" [da7ca2f4-df86-428f-abf5-d29a4ace81ae] Running
	I0217 12:36:16.881185  861156 system_pods.go:89] "csi-hostpath-attacher-0" [440a9d83-5fc6-44cd-8083-48b26a7ce201] Running
	I0217 12:36:16.881208  861156 system_pods.go:89] "csi-hostpath-resizer-0" [7cca9efd-6265-46d6-a61a-cd6101ff2fb1] Running
	I0217 12:36:16.881252  861156 system_pods.go:89] "csi-hostpathplugin-hqhk6" [cfd14aca-d1a5-43e0-864b-fa771487f044] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0217 12:36:16.881278  861156 system_pods.go:89] "etcd-addons-925274" [92d83d2f-4d04-4b7b-9e9c-4204b9554e4d] Running
	I0217 12:36:16.881310  861156 system_pods.go:89] "kindnet-ngx9r" [31c23aa9-156a-4f89-92e9-71773bdb8d00] Running
	I0217 12:36:16.881330  861156 system_pods.go:89] "kube-apiserver-addons-925274" [77c16c05-48d2-497f-8c54-85bd14c3ff6a] Running
	I0217 12:36:16.881361  861156 system_pods.go:89] "kube-controller-manager-addons-925274" [a689e992-7cb4-40b7-886b-caf5fb64969c] Running
	I0217 12:36:16.881390  861156 system_pods.go:89] "kube-ingress-dns-minikube" [7bd144d1-49f7-4ad3-b78b-5585173c3466] Running
	I0217 12:36:16.881412  861156 system_pods.go:89] "kube-proxy-9mkwr" [ce4fa4a1-1905-49a7-80c9-488708fea328] Running
	I0217 12:36:16.881434  861156 system_pods.go:89] "kube-scheduler-addons-925274" [95c168f5-fa0a-4804-a68e-4f171b9e6ec6] Running
	I0217 12:36:16.881467  861156 system_pods.go:89] "metrics-server-7fbb699795-m6442" [1d84bb16-0b65-4275-8143-4a1f4c61003f] Running
	I0217 12:36:16.881493  861156 system_pods.go:89] "nvidia-device-plugin-daemonset-s4mtb" [b1b07897-040f-443d-8dcc-0aea2b69a387] Running
	I0217 12:36:16.881514  861156 system_pods.go:89] "registry-6c88467877-gzrk4" [bc98d5d6-113a-4202-814b-28cac3908f75] Running
	I0217 12:36:16.881535  861156 system_pods.go:89] "registry-proxy-f8jhn" [db9d1473-4392-40d5-9a55-91cf8512c525] Running
	I0217 12:36:16.881555  861156 system_pods.go:89] "snapshot-controller-68b874b76f-9jz8c" [4044c429-a3c1-4d22-b302-cd8c399b1e97] Running
	I0217 12:36:16.881590  861156 system_pods.go:89] "snapshot-controller-68b874b76f-dnzt2" [4c63c9b8-3447-491f-9b59-74d926a5809c] Running
	I0217 12:36:16.881608  861156 system_pods.go:89] "storage-provisioner" [76ea267c-04e0-41c7-9d68-6fda0413c618] Running
	I0217 12:36:16.881631  861156 system_pods.go:126] duration metric: took 8.418963ms to wait for k8s-apps to be running ...
	I0217 12:36:16.881672  861156 system_svc.go:44] waiting for kubelet service to be running ....
	I0217 12:36:16.881751  861156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 12:36:16.916220  861156 system_svc.go:56] duration metric: took 34.55835ms WaitForService to wait for kubelet
	I0217 12:36:16.916303  861156 kubeadm.go:582] duration metric: took 2m0.421203323s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0217 12:36:16.916337  861156 node_conditions.go:102] verifying NodePressure condition ...
	I0217 12:36:16.919714  861156 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0217 12:36:16.919787  861156 node_conditions.go:123] node cpu capacity is 2
	I0217 12:36:16.919813  861156 node_conditions.go:105] duration metric: took 3.440651ms to run NodePressure ...
	I0217 12:36:16.919901  861156 start.go:241] waiting for startup goroutines ...
	I0217 12:36:17.359896  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:17.863532  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:18.360067  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:18.859476  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:19.361209  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:19.858769  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:20.359146  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:20.858400  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:21.360358  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:21.859493  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:22.360230  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:22.859072  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:23.358990  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:23.858438  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:24.358651  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:24.859734  861156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0217 12:36:25.361921  861156 kapi.go:107] duration metric: took 2m1.006730442s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0217 12:36:25.365113  861156 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, inspektor-gadget, storage-provisioner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0217 12:36:25.368885  861156 addons.go:514] duration metric: took 2m8.873545243s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin cloud-spanner inspektor-gadget storage-provisioner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0217 12:36:25.368944  861156 start.go:246] waiting for cluster config update ...
	I0217 12:36:25.368967  861156 start.go:255] writing updated cluster config ...
	I0217 12:36:25.369293  861156 ssh_runner.go:195] Run: rm -f paused
	I0217 12:36:25.769664  861156 start.go:600] kubectl: 1.32.2, cluster: 1.32.1 (minor skew: 0)
	I0217 12:36:25.772958  861156 out.go:177] * Done! kubectl is now configured to use "addons-925274" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 17 12:38:13 addons-925274 crio[977]: time="2025-02-17 12:38:13.056740453Z" level=info msg="Removed pod sandbox: f2938feec05cb0b8e6155d88d5ff2891c9c18e4668e34acddf6de403fefbf43f" id=472cdcba-4503-4cfd-abf6-87db028eda32 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 17 12:39:59 addons-925274 crio[977]: time="2025-02-17 12:39:59.035405422Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-v2cwh/POD" id=6cd0e18f-1fc6-4483-a6b4-53ab99826025 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 17 12:39:59 addons-925274 crio[977]: time="2025-02-17 12:39:59.035467549Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 17 12:39:59 addons-925274 crio[977]: time="2025-02-17 12:39:59.072267380Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-v2cwh Namespace:default ID:8c3e9fa42714516c4dbb4694fb97832570ce641770861fe8a4ac6a8b5f04b577 UID:b6f8235f-ba44-456f-91bb-317877a5a2fc NetNS:/var/run/netns/c1d635f9-1693-44db-903d-ef003e297d79 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 17 12:39:59 addons-925274 crio[977]: time="2025-02-17 12:39:59.072306182Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-v2cwh to CNI network \"kindnet\" (type=ptp)"
	Feb 17 12:39:59 addons-925274 crio[977]: time="2025-02-17 12:39:59.122835158Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-v2cwh Namespace:default ID:8c3e9fa42714516c4dbb4694fb97832570ce641770861fe8a4ac6a8b5f04b577 UID:b6f8235f-ba44-456f-91bb-317877a5a2fc NetNS:/var/run/netns/c1d635f9-1693-44db-903d-ef003e297d79 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 17 12:39:59 addons-925274 crio[977]: time="2025-02-17 12:39:59.122982059Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-v2cwh for CNI network kindnet (type=ptp)"
	Feb 17 12:39:59 addons-925274 crio[977]: time="2025-02-17 12:39:59.125599314Z" level=info msg="Ran pod sandbox 8c3e9fa42714516c4dbb4694fb97832570ce641770861fe8a4ac6a8b5f04b577 with infra container: default/hello-world-app-7d9564db4-v2cwh/POD" id=6cd0e18f-1fc6-4483-a6b4-53ab99826025 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 17 12:39:59 addons-925274 crio[977]: time="2025-02-17 12:39:59.129454995Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ea3b7930-f8c9-4edb-b696-f7958ed003f4 name=/runtime.v1.ImageService/ImageStatus
	Feb 17 12:39:59 addons-925274 crio[977]: time="2025-02-17 12:39:59.130021725Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=ea3b7930-f8c9-4edb-b696-f7958ed003f4 name=/runtime.v1.ImageService/ImageStatus
	Feb 17 12:39:59 addons-925274 crio[977]: time="2025-02-17 12:39:59.131297910Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=2f4bf5e2-03db-41d7-960c-7643bd33cfa4 name=/runtime.v1.ImageService/PullImage
	Feb 17 12:39:59 addons-925274 crio[977]: time="2025-02-17 12:39:59.134091546Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Feb 17 12:39:59 addons-925274 crio[977]: time="2025-02-17 12:39:59.379684219Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Feb 17 12:40:00 addons-925274 crio[977]: time="2025-02-17 12:40:00.529021645Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=2f4bf5e2-03db-41d7-960c-7643bd33cfa4 name=/runtime.v1.ImageService/PullImage
	Feb 17 12:40:00 addons-925274 crio[977]: time="2025-02-17 12:40:00.529681895Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e9d872fc-8f59-430d-bade-c4c04bb88c82 name=/runtime.v1.ImageService/ImageStatus
	Feb 17 12:40:00 addons-925274 crio[977]: time="2025-02-17 12:40:00.530525615Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e9d872fc-8f59-430d-bade-c4c04bb88c82 name=/runtime.v1.ImageService/ImageStatus
	Feb 17 12:40:00 addons-925274 crio[977]: time="2025-02-17 12:40:00.531957940Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c17d4a71-e69f-469f-8806-43c8ca95c105 name=/runtime.v1.ImageService/ImageStatus
	Feb 17 12:40:00 addons-925274 crio[977]: time="2025-02-17 12:40:00.532845942Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c17d4a71-e69f-469f-8806-43c8ca95c105 name=/runtime.v1.ImageService/ImageStatus
	Feb 17 12:40:00 addons-925274 crio[977]: time="2025-02-17 12:40:00.534676944Z" level=info msg="Creating container: default/hello-world-app-7d9564db4-v2cwh/hello-world-app" id=ca2f1972-bf69-4454-a070-55933d310974 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 17 12:40:00 addons-925274 crio[977]: time="2025-02-17 12:40:00.534823287Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 17 12:40:00 addons-925274 crio[977]: time="2025-02-17 12:40:00.562416106Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cf12c0a99412f16bdfb7f65d79c47576b99f0ad93c6ae2b11220618143d665e8/merged/etc/passwd: no such file or directory"
	Feb 17 12:40:00 addons-925274 crio[977]: time="2025-02-17 12:40:00.562640699Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cf12c0a99412f16bdfb7f65d79c47576b99f0ad93c6ae2b11220618143d665e8/merged/etc/group: no such file or directory"
	Feb 17 12:40:00 addons-925274 crio[977]: time="2025-02-17 12:40:00.623096202Z" level=info msg="Created container c7925ab06505145a699019b16aef94b6af0bfcd9d62647192454fb4622ef8736: default/hello-world-app-7d9564db4-v2cwh/hello-world-app" id=ca2f1972-bf69-4454-a070-55933d310974 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 17 12:40:00 addons-925274 crio[977]: time="2025-02-17 12:40:00.624155630Z" level=info msg="Starting container: c7925ab06505145a699019b16aef94b6af0bfcd9d62647192454fb4622ef8736" id=af58dbb9-e02d-4a7b-8e43-da5941f61d95 name=/runtime.v1.RuntimeService/StartContainer
	Feb 17 12:40:00 addons-925274 crio[977]: time="2025-02-17 12:40:00.645138007Z" level=info msg="Started container" PID=8663 containerID=c7925ab06505145a699019b16aef94b6af0bfcd9d62647192454fb4622ef8736 description=default/hello-world-app-7d9564db4-v2cwh/hello-world-app id=af58dbb9-e02d-4a7b-8e43-da5941f61d95 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8c3e9fa42714516c4dbb4694fb97832570ce641770861fe8a4ac6a8b5f04b577
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	c7925ab065051       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   8c3e9fa427145       hello-world-app-7d9564db4-v2cwh
	26c981d68c692       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago            Running             nginx                     0                   4b9468c131221       nginx
	5040fd21a5171       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   1e8db90680808       busybox
	498718c077ce8       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             3 minutes ago            Running             controller                0                   103adf24361e1       ingress-nginx-controller-56d7c84fd4-27k5h
	f3fca99ff1f2d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   4 minutes ago            Exited              patch                     0                   7fb5c44e375b5       ingress-nginx-admission-patch-wkcqb
	d9753ecf4ebfa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   4 minutes ago            Exited              create                    0                   c5120418e2fcb       ingress-nginx-admission-create-wn6xk
	3193db6f09ee4       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             4 minutes ago            Running             minikube-ingress-dns      0                   cb77c60ce55e3       kube-ingress-dns-minikube
	5746dabf168c2       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             4 minutes ago            Running             local-path-provisioner    0                   1bfee4bf697e0       local-path-provisioner-76f89f99b5-tsjv4
	0796e96fc7e17       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             4 minutes ago            Running             coredns                   0                   bb70c89cb9f4f       coredns-668d6bf9bc-rvcxj
	425da241c061f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago            Running             storage-provisioner       0                   fd9008b1f9579       storage-provisioner
	66f71c28e4955       docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955                           5 minutes ago            Running             kindnet-cni               0                   672363c821cf4       kindnet-ngx9r
	0a0166bca5f7b       e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0                                                             5 minutes ago            Running             kube-proxy                0                   cc9aba67f45ca       kube-proxy-9mkwr
	103af99aee15b       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82                                                             5 minutes ago            Running             etcd                      0                   ac00899efffef       etcd-addons-925274
	4a600018ee231       265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19                                                             5 minutes ago            Running             kube-apiserver            0                   3ab3310c4fffa       kube-apiserver-addons-925274
	b300eec80adb6       ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c                                                             5 minutes ago            Running             kube-scheduler            0                   0c6ebdf62da62       kube-scheduler-addons-925274
	917718be6eb87       2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13                                                             5 minutes ago            Running             kube-controller-manager   0                   ae07f73e74ef9       kube-controller-manager-addons-925274
	
	
	==> coredns [0796e96fc7e17bb77901def05f41d591911105950dad08587349aa3990b43a99] <==
	[INFO] 10.244.0.11:49440 - 27526 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.004557916s
	[INFO] 10.244.0.11:49440 - 38530 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000181075s
	[INFO] 10.244.0.11:49440 - 14143 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000665313s
	[INFO] 10.244.0.11:38146 - 33104 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000172s
	[INFO] 10.244.0.11:38146 - 32874 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00025857s
	[INFO] 10.244.0.11:51084 - 8888 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095554s
	[INFO] 10.244.0.11:51084 - 8683 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000179064s
	[INFO] 10.244.0.11:48280 - 10141 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010513s
	[INFO] 10.244.0.11:48280 - 9935 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000187868s
	[INFO] 10.244.0.11:42760 - 57385 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005022643s
	[INFO] 10.244.0.11:42760 - 57813 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005106636s
	[INFO] 10.244.0.11:36606 - 15026 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000124854s
	[INFO] 10.244.0.11:36606 - 15165 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000199076s
	[INFO] 10.244.0.21:47062 - 45189 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000191946s
	[INFO] 10.244.0.21:53896 - 17060 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000107903s
	[INFO] 10.244.0.21:37672 - 20507 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000187811s
	[INFO] 10.244.0.21:44202 - 14969 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108042s
	[INFO] 10.244.0.21:41267 - 25425 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000136834s
	[INFO] 10.244.0.21:46899 - 44963 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009539s
	[INFO] 10.244.0.21:48203 - 24804 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002151863s
	[INFO] 10.244.0.21:41480 - 408 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00196882s
	[INFO] 10.244.0.21:53753 - 22481 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001440686s
	[INFO] 10.244.0.21:35151 - 8665 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001954839s
	[INFO] 10.244.0.24:44415 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000172976s
	[INFO] 10.244.0.24:51549 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092166s
	
	
	==> describe nodes <==
	Name:               addons-925274
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-925274
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d5460083481c20438a5263486cb626e4191c2126
	                    minikube.k8s.io/name=addons-925274
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_17T12_34_13_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-925274
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Feb 2025 12:34:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-925274
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Feb 2025 12:39:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Feb 2025 12:38:17 +0000   Mon, 17 Feb 2025 12:34:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Feb 2025 12:38:17 +0000   Mon, 17 Feb 2025 12:34:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Feb 2025 12:38:17 +0000   Mon, 17 Feb 2025 12:34:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Feb 2025 12:38:17 +0000   Mon, 17 Feb 2025 12:35:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-925274
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc14a2fcd7a141468cad1ef417c09dba
	  System UUID:                1ee305c8-d5fe-48df-bc7d-4c3501ef5f7a
	  Boot ID:                    38d93fbd-21a1-4ed0-a814-3afd0e57bcab
	  Kernel Version:             5.15.0-1077-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  default                     hello-world-app-7d9564db4-v2cwh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-27k5h    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m38s
	  kube-system                 coredns-668d6bf9bc-rvcxj                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m45s
	  kube-system                 etcd-addons-925274                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m50s
	  kube-system                 kindnet-ngx9r                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m44s
	  kube-system                 kube-apiserver-addons-925274                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-controller-manager-addons-925274        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-proxy-9mkwr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-scheduler-addons-925274                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  local-path-storage          local-path-provisioner-76f89f99b5-tsjv4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m38s                  kube-proxy       
	  Normal   Starting                 5m56s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m56s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet          Node addons-925274 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m56s (x8 over 5m56s)  kubelet          Node addons-925274 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m56s (x8 over 5m56s)  kubelet          Node addons-925274 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m49s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m49s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m49s                  kubelet          Node addons-925274 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m49s                  kubelet          Node addons-925274 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m49s                  kubelet          Node addons-925274 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m46s                  node-controller  Node addons-925274 event: Registered Node addons-925274 in Controller
	  Normal   NodeReady                4m58s                  kubelet          Node addons-925274 status is now: NodeReady
	
	
	==> dmesg <==
	[Feb17 12:04] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [103af99aee15b2696a4c9a8d591681e65fa0d34be1a8612c25a6a5ad6aee05f3] <==
	{"level":"warn","ts":"2025-02-17T12:34:19.700108Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T12:34:19.386040Z","time spent":"314.061919ms","remote":"127.0.0.1:54012","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":0,"response size":29,"request content":"key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" limit:1 "}
	{"level":"warn","ts":"2025-02-17T12:34:19.719482Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.45623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2025-02-17T12:34:19.719563Z","caller":"traceutil/trace.go:171","msg":"trace[1268293769] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:375; }","duration":"333.539977ms","start":"2025-02-17T12:34:19.385998Z","end":"2025-02-17T12:34:19.719538Z","steps":["trace[1268293769] 'agreement among raft nodes before linearized reading'  (duration: 333.388753ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T12:34:19.719602Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T12:34:19.385987Z","time spent":"333.607019ms","remote":"127.0.0.1:54012","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":2902,"request content":"key:\"/registry/daemonsets/kube-system/kube-proxy\" limit:1 "}
	{"level":"warn","ts":"2025-02-17T12:34:19.719723Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.737979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-17T12:34:19.719746Z","caller":"traceutil/trace.go:171","msg":"trace[1947820] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:375; }","duration":"333.767114ms","start":"2025-02-17T12:34:19.385972Z","end":"2025-02-17T12:34:19.719739Z","steps":["trace[1947820] 'agreement among raft nodes before linearized reading'  (duration: 333.724818ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T12:34:19.719778Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T12:34:19.385949Z","time spent":"333.823007ms","remote":"127.0.0.1:53756","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts\" limit:1 "}
	{"level":"warn","ts":"2025-02-17T12:34:19.681788Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.18689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-02-17T12:34:19.912460Z","caller":"traceutil/trace.go:171","msg":"trace[1400305324] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:375; }","duration":"454.869396ms","start":"2025-02-17T12:34:19.457571Z","end":"2025-02-17T12:34:19.912440Z","steps":["trace[1400305324] 'agreement among raft nodes before linearized reading'  (duration: 216.61643ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T12:34:19.944448Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T12:34:19.457526Z","time spent":"486.89882ms","remote":"127.0.0.1:53670","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":1,"response size":375,"request content":"key:\"/registry/namespaces/kube-system\" limit:1 "}
	{"level":"warn","ts":"2025-02-17T12:34:20.040078Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.06416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-17T12:34:20.040386Z","caller":"traceutil/trace.go:171","msg":"trace[799993332] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"256.066147ms","start":"2025-02-17T12:34:19.784306Z","end":"2025-02-17T12:34:20.040372Z","steps":["trace[799993332] 'process raft request'  (duration: 255.969477ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-17T12:34:20.068660Z","caller":"traceutil/trace.go:171","msg":"trace[1159426336] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:375; }","duration":"284.481738ms","start":"2025-02-17T12:34:19.784151Z","end":"2025-02-17T12:34:20.068633Z","steps":["trace[1159426336] 'agreement among raft nodes before linearized reading'  (duration: 64.839488ms)","trace[1159426336] 'range keys from in-memory index tree'  (duration: 128.20758ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-17T12:34:20.077740Z","caller":"traceutil/trace.go:171","msg":"trace[1073899474] linearizableReadLoop","detail":"{readStateIndex:388; appliedIndex:386; }","duration":"228.946697ms","start":"2025-02-17T12:34:19.848778Z","end":"2025-02-17T12:34:20.077725Z","steps":["trace[1073899474] 'read index received'  (duration: 191.184609ms)","trace[1073899474] 'applied index is now lower than readState.Index'  (duration: 37.761505ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-17T12:34:20.077812Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.555346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-17T12:34:20.079494Z","caller":"traceutil/trace.go:171","msg":"trace[1358597768] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:0; response_revision:379; }","duration":"231.249088ms","start":"2025-02-17T12:34:19.848231Z","end":"2025-02-17T12:34:20.079480Z","steps":["trace[1358597768] 'agreement among raft nodes before linearized reading'  (duration: 229.533004ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-17T12:34:20.077960Z","caller":"traceutil/trace.go:171","msg":"trace[225279850] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"229.220552ms","start":"2025-02-17T12:34:19.848714Z","end":"2025-02-17T12:34:20.077935Z","steps":["trace[225279850] 'process raft request'  (duration: 228.858453ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-17T12:34:20.077982Z","caller":"traceutil/trace.go:171","msg":"trace[768946312] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"190.5137ms","start":"2025-02-17T12:34:19.887464Z","end":"2025-02-17T12:34:20.077978Z","steps":["trace[768946312] 'process raft request'  (duration: 190.202143ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-17T12:34:20.078065Z","caller":"traceutil/trace.go:171","msg":"trace[157887336] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"190.429707ms","start":"2025-02-17T12:34:19.887630Z","end":"2025-02-17T12:34:20.078059Z","steps":["trace[157887336] 'process raft request'  (duration: 190.06613ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T12:34:20.078098Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.509021ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-17T12:34:20.079967Z","caller":"traceutil/trace.go:171","msg":"trace[2042955446] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:379; }","duration":"231.378653ms","start":"2025-02-17T12:34:19.848579Z","end":"2025-02-17T12:34:20.079958Z","steps":["trace[2042955446] 'agreement among raft nodes before linearized reading'  (duration: 229.497289ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T12:34:20.112188Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.765661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-925274\" limit:1 ","response":"range_response_count:1 size:5745"}
	{"level":"info","ts":"2025-02-17T12:34:20.112339Z","caller":"traceutil/trace.go:171","msg":"trace[239848928] range","detail":"{range_begin:/registry/minions/addons-925274; range_end:; response_count:1; response_revision:380; }","duration":"167.924303ms","start":"2025-02-17T12:34:19.944400Z","end":"2025-02-17T12:34:20.112325Z","steps":["trace[239848928] 'agreement among raft nodes before linearized reading'  (duration: 137.262328ms)","trace[239848928] 'assemble the response'  (duration: 30.474731ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-17T12:34:20.616479Z","caller":"traceutil/trace.go:171","msg":"trace[1018014221] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"112.081781ms","start":"2025-02-17T12:34:20.504381Z","end":"2025-02-17T12:34:20.616463Z","steps":["trace[1018014221] 'process raft request'  (duration: 93.916001ms)","trace[1018014221] 'compare'  (duration: 17.844607ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-17T12:34:20.616804Z","caller":"traceutil/trace.go:171","msg":"trace[1050510909] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"112.353619ms","start":"2025-02-17T12:34:20.504441Z","end":"2025-02-17T12:34:20.616794Z","steps":["trace[1050510909] 'process raft request'  (duration: 111.801066ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:40:01 up  5:22,  0 users,  load average: 0.42, 1.89, 2.40
	Linux addons-925274 5.15.0-1077-aws #84~20.04.1-Ubuntu SMP Mon Jan 20 22:14:27 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [66f71c28e4955af0e9f4cdb067799c6427a1b80cc38cd54fbee6f35aaf1f7600] <==
	I0217 12:37:53.475954       1 main.go:301] handling current node
	I0217 12:38:03.473293       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0217 12:38:03.473329       1 main.go:301] handling current node
	I0217 12:38:13.473262       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0217 12:38:13.473302       1 main.go:301] handling current node
	I0217 12:38:23.472890       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0217 12:38:23.472924       1 main.go:301] handling current node
	I0217 12:38:33.474086       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0217 12:38:33.474193       1 main.go:301] handling current node
	I0217 12:38:43.476326       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0217 12:38:43.476439       1 main.go:301] handling current node
	I0217 12:38:53.478064       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0217 12:38:53.478097       1 main.go:301] handling current node
	I0217 12:39:03.478679       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0217 12:39:03.478714       1 main.go:301] handling current node
	I0217 12:39:13.480680       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0217 12:39:13.480714       1 main.go:301] handling current node
	I0217 12:39:23.472910       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0217 12:39:23.472944       1 main.go:301] handling current node
	I0217 12:39:33.475244       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0217 12:39:33.475279       1 main.go:301] handling current node
	I0217 12:39:43.481935       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0217 12:39:43.482056       1 main.go:301] handling current node
	I0217 12:39:53.477415       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0217 12:39:53.477450       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4a600018ee2312d1adab0b3b0442ef473c7f2ba5cb64a2e131b33f3e96ef40eb] <==
	I0217 12:35:19.742694       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0217 12:36:37.874432       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34354: use of closed network connection
	E0217 12:36:38.146786       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34368: use of closed network connection
	E0217 12:36:38.294742       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34398: use of closed network connection
	I0217 12:36:47.697920       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.120.29"}
	I0217 12:37:32.202198       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0217 12:37:33.242049       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0217 12:37:37.800102       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0217 12:37:38.147480       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.120.21"}
	I0217 12:37:52.487633       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0217 12:38:10.850594       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0217 12:38:10.850646       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0217 12:38:10.884988       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0217 12:38:10.885046       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0217 12:38:10.909314       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0217 12:38:10.909367       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0217 12:38:10.962145       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0217 12:38:10.962273       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0217 12:38:11.000021       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0217 12:38:11.000150       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0217 12:38:11.962540       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0217 12:38:12.000605       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0217 12:38:12.089281       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0217 12:38:20.652532       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0217 12:39:58.996046       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.198.62"}
	
	
	==> kube-controller-manager [917718be6eb87a601d793709189a79e984095d43b60826d6ab6829ed637697ea] <==
	E0217 12:38:58.325144       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0217 12:38:58.326171       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0217 12:38:58.326250       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0217 12:39:18.647676       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0217 12:39:18.648759       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0217 12:39:18.649700       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0217 12:39:18.649738       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0217 12:39:21.391363       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0217 12:39:21.392372       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0217 12:39:21.393381       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0217 12:39:21.393415       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0217 12:39:33.878232       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0217 12:39:33.879332       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0217 12:39:33.880403       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0217 12:39:33.880439       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0217 12:39:56.709858       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0217 12:39:56.710892       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0217 12:39:56.711892       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0217 12:39:56.711928       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0217 12:39:58.750302       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="49.937394ms"
	I0217 12:39:58.760462       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="9.631321ms"
	I0217 12:39:58.761176       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="29.981µs"
	I0217 12:39:58.761248       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="15.983µs"
	I0217 12:40:00.892173       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="13.121465ms"
	I0217 12:40:00.892351       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="54.292µs"
	
	
	==> kube-proxy [0a0166bca5f7bc2b91a5f76149c0bf76384b2b41f918181de25eaa8ae626c557] <==
	I0217 12:34:21.767236       1 server_linux.go:66] "Using iptables proxy"
	I0217 12:34:22.794223       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0217 12:34:22.794373       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0217 12:34:22.904604       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0217 12:34:22.904728       1 server_linux.go:170] "Using iptables Proxier"
	I0217 12:34:22.908732       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0217 12:34:22.911109       1 server.go:497] "Version info" version="v1.32.1"
	I0217 12:34:22.911200       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0217 12:34:22.920995       1 config.go:199] "Starting service config controller"
	I0217 12:34:22.932357       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0217 12:34:22.932479       1 config.go:329] "Starting node config controller"
	I0217 12:34:22.934102       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0217 12:34:22.933230       1 config.go:105] "Starting endpoint slice config controller"
	I0217 12:34:22.934180       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0217 12:34:23.036191       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0217 12:34:23.036386       1 shared_informer.go:320] Caches are synced for service config
	I0217 12:34:23.036399       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b300eec80adb62926e58cd378e939c31c00a1b7ca1038143259d2b184d8a952d] <==
	W0217 12:34:10.451950       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0217 12:34:10.451986       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0217 12:34:10.452028       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0217 12:34:10.452047       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0217 12:34:10.452082       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0217 12:34:10.452097       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0217 12:34:10.452131       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0217 12:34:10.452146       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0217 12:34:10.452220       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0217 12:34:10.452237       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0217 12:34:10.452325       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0217 12:34:10.452341       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0217 12:34:10.452371       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0217 12:34:10.452386       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0217 12:34:10.452417       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0217 12:34:10.452431       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0217 12:34:10.452472       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0217 12:34:10.452487       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0217 12:34:10.453573       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0217 12:34:10.453673       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0217 12:34:10.453749       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0217 12:34:10.453769       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0217 12:34:10.453855       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0217 12:34:10.453874       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0217 12:34:11.442825       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 17 12:39:12 addons-925274 kubelet[1499]: E0217 12:39:12.496056    1499 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8f388f44e794e4f42cb6e25945cd4138f1f950b4d74f46e6812febf52c17135f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8f388f44e794e4f42cb6e25945cd4138f1f950b4d74f46e6812febf52c17135f/diff: no such file or directory, extraDiskErr: <nil>
	Feb 17 12:39:12 addons-925274 kubelet[1499]: E0217 12:39:12.663553    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739795952663320514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 17 12:39:12 addons-925274 kubelet[1499]: E0217 12:39:12.663590    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739795952663320514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 17 12:39:22 addons-925274 kubelet[1499]: E0217 12:39:22.666376    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739795962666159767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 17 12:39:22 addons-925274 kubelet[1499]: E0217 12:39:22.666417    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739795962666159767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 17 12:39:32 addons-925274 kubelet[1499]: E0217 12:39:32.669578    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739795972669280761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 17 12:39:32 addons-925274 kubelet[1499]: E0217 12:39:32.669625    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739795972669280761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 17 12:39:38 addons-925274 kubelet[1499]: E0217 12:39:38.974063    1499 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c018c40ba40c317ee085535d29c57599db5274731c759d55180bb9fd71aec99a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c018c40ba40c317ee085535d29c57599db5274731c759d55180bb9fd71aec99a/diff: no such file or directory, extraDiskErr: <nil>
	Feb 17 12:39:42 addons-925274 kubelet[1499]: E0217 12:39:42.672795    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739795982672517468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 17 12:39:42 addons-925274 kubelet[1499]: E0217 12:39:42.672838    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739795982672517468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 17 12:39:48 addons-925274 kubelet[1499]: E0217 12:39:48.153990    1499 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d5a3ca6b4750586ada7dadf9a00cb4fc9e2ac96e61799c2535205f253d2788ed/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d5a3ca6b4750586ada7dadf9a00cb4fc9e2ac96e61799c2535205f253d2788ed/diff: no such file or directory, extraDiskErr: <nil>
	Feb 17 12:39:52 addons-925274 kubelet[1499]: E0217 12:39:52.675266    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739795992675027738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 17 12:39:52 addons-925274 kubelet[1499]: E0217 12:39:52.675303    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739795992675027738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 17 12:39:58 addons-925274 kubelet[1499]: I0217 12:39:58.733902    1499 memory_manager.go:355] "RemoveStaleState removing state" podUID="cfd14aca-d1a5-43e0-864b-fa771487f044" containerName="csi-external-health-monitor-controller"
	Feb 17 12:39:58 addons-925274 kubelet[1499]: I0217 12:39:58.733947    1499 memory_manager.go:355] "RemoveStaleState removing state" podUID="cfd14aca-d1a5-43e0-864b-fa771487f044" containerName="hostpath"
	Feb 17 12:39:58 addons-925274 kubelet[1499]: I0217 12:39:58.733956    1499 memory_manager.go:355] "RemoveStaleState removing state" podUID="cfd14aca-d1a5-43e0-864b-fa771487f044" containerName="csi-provisioner"
	Feb 17 12:39:58 addons-925274 kubelet[1499]: I0217 12:39:58.733962    1499 memory_manager.go:355] "RemoveStaleState removing state" podUID="cfd14aca-d1a5-43e0-864b-fa771487f044" containerName="csi-snapshotter"
	Feb 17 12:39:58 addons-925274 kubelet[1499]: I0217 12:39:58.733968    1499 memory_manager.go:355] "RemoveStaleState removing state" podUID="763e832b-be8f-482c-a314-0135b165ded3" containerName="task-pv-container"
	Feb 17 12:39:58 addons-925274 kubelet[1499]: I0217 12:39:58.733974    1499 memory_manager.go:355] "RemoveStaleState removing state" podUID="440a9d83-5fc6-44cd-8083-48b26a7ce201" containerName="csi-attacher"
	Feb 17 12:39:58 addons-925274 kubelet[1499]: I0217 12:39:58.733981    1499 memory_manager.go:355] "RemoveStaleState removing state" podUID="cfd14aca-d1a5-43e0-864b-fa771487f044" containerName="liveness-probe"
	Feb 17 12:39:58 addons-925274 kubelet[1499]: I0217 12:39:58.733988    1499 memory_manager.go:355] "RemoveStaleState removing state" podUID="4c63c9b8-3447-491f-9b59-74d926a5809c" containerName="volume-snapshot-controller"
	Feb 17 12:39:58 addons-925274 kubelet[1499]: I0217 12:39:58.733995    1499 memory_manager.go:355] "RemoveStaleState removing state" podUID="4044c429-a3c1-4d22-b302-cd8c399b1e97" containerName="volume-snapshot-controller"
	Feb 17 12:39:58 addons-925274 kubelet[1499]: I0217 12:39:58.734000    1499 memory_manager.go:355] "RemoveStaleState removing state" podUID="7cca9efd-6265-46d6-a61a-cd6101ff2fb1" containerName="csi-resizer"
	Feb 17 12:39:58 addons-925274 kubelet[1499]: I0217 12:39:58.734006    1499 memory_manager.go:355] "RemoveStaleState removing state" podUID="cfd14aca-d1a5-43e0-864b-fa771487f044" containerName="node-driver-registrar"
	Feb 17 12:39:58 addons-925274 kubelet[1499]: I0217 12:39:58.765029    1499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9gnr\" (UniqueName: \"kubernetes.io/projected/b6f8235f-ba44-456f-91bb-317877a5a2fc-kube-api-access-z9gnr\") pod \"hello-world-app-7d9564db4-v2cwh\" (UID: \"b6f8235f-ba44-456f-91bb-317877a5a2fc\") " pod="default/hello-world-app-7d9564db4-v2cwh"
	
	
	==> storage-provisioner [425da241c061f1893147bd164259e675645901ed0ff59bb08a7f1ea724153a78] <==
	I0217 12:35:04.548838       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0217 12:35:04.601633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0217 12:35:04.601752       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0217 12:35:04.627339       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0217 12:35:04.627537       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"07768ed7-28ad-4bfb-98f3-de170233cb98", APIVersion:"v1", ResourceVersion:"927", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-925274_ece249a2-b97e-4db7-a37d-a8c5e8deec8a became leader
	I0217 12:35:04.628142       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-925274_ece249a2-b97e-4db7-a37d-a8c5e8deec8a!
	I0217 12:35:04.729242       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-925274_ece249a2-b97e-4db7-a37d-a8c5e8deec8a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-925274 -n addons-925274
helpers_test.go:261: (dbg) Run:  kubectl --context addons-925274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-wn6xk ingress-nginx-admission-patch-wkcqb
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-925274 describe pod ingress-nginx-admission-create-wn6xk ingress-nginx-admission-patch-wkcqb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-925274 describe pod ingress-nginx-admission-create-wn6xk ingress-nginx-admission-patch-wkcqb: exit status 1 (89.827247ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wn6xk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-wkcqb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-925274 describe pod ingress-nginx-admission-create-wn6xk ingress-nginx-admission-patch-wkcqb: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-925274 addons disable ingress-dns --alsologtostderr -v=1: (1.376832118s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-925274 addons disable ingress --alsologtostderr -v=1: (7.786981525s)
--- FAIL: TestAddons/parallel/Ingress (154.24s)

                                                
                                    

Test pass (298/331)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5.75
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.42
9 TestDownloadOnly/v1.20.0/DeleteAll 0.39
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.32.1/json-events 4.69
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.09
18 TestDownloadOnly/v1.32.1/DeleteAll 0.22
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 178.76
31 TestAddons/serial/GCPAuth/Namespaces 0.22
32 TestAddons/serial/GCPAuth/FakeCredentials 11.95
35 TestAddons/parallel/Registry 16.48
37 TestAddons/parallel/InspektorGadget 11.85
38 TestAddons/parallel/MetricsServer 5.83
40 TestAddons/parallel/CSI 57.28
41 TestAddons/parallel/Headlamp 17.88
42 TestAddons/parallel/CloudSpanner 5.6
43 TestAddons/parallel/LocalPath 9.46
44 TestAddons/parallel/NvidiaDevicePlugin 5.68
45 TestAddons/parallel/Yakd 12.01
47 TestAddons/StoppedEnableDisable 12.17
48 TestCertOptions 38.82
49 TestCertExpiration 245.81
51 TestForceSystemdFlag 43.58
52 TestForceSystemdEnv 44.41
58 TestErrorSpam/setup 32.06
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.1
61 TestErrorSpam/pause 1.78
62 TestErrorSpam/unpause 1.78
63 TestErrorSpam/stop 1.5
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 49.59
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 15.39
70 TestFunctional/serial/KubeContext 0.08
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.58
75 TestFunctional/serial/CacheCmd/cache/add_local 1.41
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.15
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 60.11
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.78
86 TestFunctional/serial/LogsFileCmd 1.78
87 TestFunctional/serial/InvalidService 4.96
89 TestFunctional/parallel/ConfigCmd 0.5
90 TestFunctional/parallel/DashboardCmd 12.14
91 TestFunctional/parallel/DryRun 0.48
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.04
97 TestFunctional/parallel/ServiceCmdConnect 11.75
98 TestFunctional/parallel/AddonsCmd 0.23
99 TestFunctional/parallel/PersistentVolumeClaim 26.06
101 TestFunctional/parallel/SSHCmd 0.7
102 TestFunctional/parallel/CpCmd 2.36
104 TestFunctional/parallel/FileSync 0.36
105 TestFunctional/parallel/CertSync 2.21
109 TestFunctional/parallel/NodeLabels 0.14
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
113 TestFunctional/parallel/License 0.26
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.49
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
129 TestFunctional/parallel/MountCmd/any-port 8.7
130 TestFunctional/parallel/ServiceCmd/List 0.69
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
133 TestFunctional/parallel/ServiceCmd/Format 0.38
134 TestFunctional/parallel/ServiceCmd/URL 0.39
135 TestFunctional/parallel/MountCmd/specific-port 2.17
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.59
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 1.35
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.68
144 TestFunctional/parallel/ImageCommands/Setup 0.75
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.46
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.2
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.41
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.7
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.92
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 177.17
163 TestMultiControlPlane/serial/DeployApp 8.56
164 TestMultiControlPlane/serial/PingHostFromPods 1.66
165 TestMultiControlPlane/serial/AddWorkerNode 37.11
166 TestMultiControlPlane/serial/NodeLabels 0.1
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.02
168 TestMultiControlPlane/serial/CopyFile 19.19
169 TestMultiControlPlane/serial/StopSecondaryNode 12.71
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
171 TestMultiControlPlane/serial/RestartSecondaryNode 24.94
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.31
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 176.51
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.63
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 35.69
177 TestMultiControlPlane/serial/RestartCluster 97.66
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
179 TestMultiControlPlane/serial/AddSecondaryNode 78.44
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1
184 TestJSONOutput/start/Command 50.42
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.75
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.66
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.85
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.24
209 TestKicCustomNetwork/create_custom_network 36.3
210 TestKicCustomNetwork/use_default_bridge_network 33.78
211 TestKicExistingNetwork 30.3
212 TestKicCustomSubnet 35.85
213 TestKicStaticIP 34.19
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 71.2
218 TestMountStart/serial/StartWithMountFirst 9.09
219 TestMountStart/serial/VerifyMountFirst 0.26
220 TestMountStart/serial/StartWithMountSecond 9.79
221 TestMountStart/serial/VerifyMountSecond 0.26
222 TestMountStart/serial/DeleteFirst 1.64
223 TestMountStart/serial/VerifyMountPostDelete 0.26
224 TestMountStart/serial/Stop 1.2
225 TestMountStart/serial/RestartStopped 7.75
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 79.06
230 TestMultiNode/serial/DeployApp2Nodes 6.43
231 TestMultiNode/serial/PingHostFrom2Pods 1.06
232 TestMultiNode/serial/AddNode 33.14
233 TestMultiNode/serial/MultiNodeLabels 0.09
234 TestMultiNode/serial/ProfileList 0.7
235 TestMultiNode/serial/CopyFile 10.01
236 TestMultiNode/serial/StopNode 2.28
237 TestMultiNode/serial/StartAfterStop 10.05
238 TestMultiNode/serial/RestartKeepsNodes 89.97
239 TestMultiNode/serial/DeleteNode 5.33
240 TestMultiNode/serial/StopMultiNode 23.87
241 TestMultiNode/serial/RestartMultiNode 46.77
242 TestMultiNode/serial/ValidateNameConflict 33.28
247 TestPreload 125.55
249 TestScheduledStopUnix 106.44
252 TestInsufficientStorage 10.52
253 TestRunningBinaryUpgrade 77.78
255 TestKubernetesUpgrade 138
256 TestMissingContainerUpgrade 163.12
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
259 TestNoKubernetes/serial/StartWithK8s 39.19
260 TestNoKubernetes/serial/StartWithStopK8s 8.64
261 TestNoKubernetes/serial/Start 11.53
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
263 TestNoKubernetes/serial/ProfileList 1.22
264 TestNoKubernetes/serial/Stop 1.26
265 TestNoKubernetes/serial/StartNoArgs 7.43
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.44
267 TestStoppedBinaryUpgrade/Setup 0.59
268 TestStoppedBinaryUpgrade/Upgrade 82.47
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.38
278 TestPause/serial/Start 61.52
279 TestPause/serial/SecondStartNoReconfiguration 37.78
287 TestNetworkPlugins/group/false 5.01
291 TestPause/serial/Pause 0.95
292 TestPause/serial/VerifyStatus 0.33
293 TestPause/serial/Unpause 0.8
294 TestPause/serial/PauseAgain 1.59
295 TestPause/serial/DeletePaused 2.97
296 TestPause/serial/VerifyDeletedResources 0.53
298 TestStartStop/group/old-k8s-version/serial/FirstStart 162.29
299 TestStartStop/group/old-k8s-version/serial/DeployApp 10.78
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.69
302 TestStartStop/group/no-preload/serial/FirstStart 73.29
303 TestStartStop/group/old-k8s-version/serial/Stop 13.06
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
305 TestStartStop/group/old-k8s-version/serial/SecondStart 306.91
306 TestStartStop/group/no-preload/serial/DeployApp 264.45
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
310 TestStartStop/group/old-k8s-version/serial/Pause 3
311 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.43
313 TestStartStop/group/embed-certs/serial/FirstStart 55.43
314 TestStartStop/group/no-preload/serial/Stop 12.2
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
316 TestStartStop/group/no-preload/serial/SecondStart 303.21
317 TestStartStop/group/embed-certs/serial/DeployApp 10.36
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
319 TestStartStop/group/embed-certs/serial/Stop 11.96
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
321 TestStartStop/group/embed-certs/serial/SecondStart 274.05
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
324 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
325 TestStartStop/group/no-preload/serial/Pause 3.11
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.52
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.12
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
331 TestStartStop/group/embed-certs/serial/Pause 3.62
333 TestStartStop/group/newest-cni/serial/FirstStart 40.05
334 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.46
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.61
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.43
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.31
338 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 293.52
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.77
341 TestStartStop/group/newest-cni/serial/Stop 1.28
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
343 TestStartStop/group/newest-cni/serial/SecondStart 22.47
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
347 TestStartStop/group/newest-cni/serial/Pause 4.53
348 TestNetworkPlugins/group/auto/Start 50.05
349 TestNetworkPlugins/group/auto/KubeletFlags 0.29
350 TestNetworkPlugins/group/auto/NetCatPod 9.29
351 TestNetworkPlugins/group/auto/DNS 0.2
352 TestNetworkPlugins/group/auto/Localhost 0.17
353 TestNetworkPlugins/group/auto/HairPin 0.16
354 TestNetworkPlugins/group/kindnet/Start 53.53
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
357 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
358 TestNetworkPlugins/group/kindnet/DNS 0.21
359 TestNetworkPlugins/group/kindnet/Localhost 0.16
360 TestNetworkPlugins/group/kindnet/HairPin 0.16
361 TestNetworkPlugins/group/calico/Start 65.51
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.31
364 TestNetworkPlugins/group/calico/NetCatPod 13.26
365 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
366 TestNetworkPlugins/group/calico/DNS 0.2
367 TestNetworkPlugins/group/calico/Localhost 0.16
368 TestNetworkPlugins/group/calico/HairPin 0.17
369 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
370 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.39
371 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.33
372 TestNetworkPlugins/group/custom-flannel/Start 63.31
373 TestNetworkPlugins/group/enable-default-cni/Start 77.95
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
376 TestNetworkPlugins/group/custom-flannel/DNS 0.19
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.39
381 TestNetworkPlugins/group/flannel/Start 63.58
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
385 TestNetworkPlugins/group/bridge/Start 74.67
386 TestNetworkPlugins/group/flannel/ControllerPod 6
387 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
388 TestNetworkPlugins/group/flannel/NetCatPod 11.27
389 TestNetworkPlugins/group/flannel/DNS 0.19
390 TestNetworkPlugins/group/flannel/Localhost 0.15
391 TestNetworkPlugins/group/flannel/HairPin 0.16
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
393 TestNetworkPlugins/group/bridge/NetCatPod 13.47
394 TestNetworkPlugins/group/bridge/DNS 0.17
395 TestNetworkPlugins/group/bridge/Localhost 0.15
396 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (5.75s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-763163 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-763163 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.750556338s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.75s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0217 12:33:19.380382  860382 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0217 12:33:19.380470  860382 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-855004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.42s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-763163
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-763163: exit status 85 (415.714489ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-763163 | jenkins | v1.35.0 | 17 Feb 25 12:33 UTC |          |
	|         | -p download-only-763163        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/17 12:33:13
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0217 12:33:13.679712  860387 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:33:13.679900  860387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:33:13.679915  860387 out.go:358] Setting ErrFile to fd 2...
	I0217 12:33:13.679921  860387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:33:13.680192  860387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
	W0217 12:33:13.680327  860387 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20427-855004/.minikube/config/config.json: open /home/jenkins/minikube-integration/20427-855004/.minikube/config/config.json: no such file or directory
	I0217 12:33:13.680718  860387 out.go:352] Setting JSON to true
	I0217 12:33:13.681547  860387 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18942,"bootTime":1739776652,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0217 12:33:13.681623  860387 start.go:139] virtualization:  
	I0217 12:33:13.685938  860387 out.go:97] [download-only-763163] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0217 12:33:13.686109  860387 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20427-855004/.minikube/cache/preloaded-tarball: no such file or directory
	I0217 12:33:13.686219  860387 notify.go:220] Checking for updates...
	I0217 12:33:13.689210  860387 out.go:169] MINIKUBE_LOCATION=20427
	I0217 12:33:13.692341  860387 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 12:33:13.695206  860387 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20427-855004/kubeconfig
	I0217 12:33:13.698143  860387 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-855004/.minikube
	I0217 12:33:13.701016  860387 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0217 12:33:13.706570  860387 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0217 12:33:13.706890  860387 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 12:33:13.733391  860387 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0217 12:33:13.733501  860387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:33:13.788247  860387 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-17 12:33:13.779250358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:33:13.788361  860387 docker.go:318] overlay module found
	I0217 12:33:13.791365  860387 out.go:97] Using the docker driver based on user configuration
	I0217 12:33:13.791395  860387 start.go:297] selected driver: docker
	I0217 12:33:13.791405  860387 start.go:901] validating driver "docker" against <nil>
	I0217 12:33:13.791524  860387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:33:13.844301  860387 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-17 12:33:13.83489895 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:33:13.844524  860387 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0217 12:33:13.844813  860387 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0217 12:33:13.844974  860387 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0217 12:33:13.848137  860387 out.go:169] Using Docker driver with root privileges
	I0217 12:33:13.851089  860387 cni.go:84] Creating CNI manager for ""
	I0217 12:33:13.851151  860387 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0217 12:33:13.851166  860387 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0217 12:33:13.851252  860387 start.go:340] cluster config:
	{Name:download-only-763163 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-763163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 12:33:13.854181  860387 out.go:97] Starting "download-only-763163" primary control-plane node in "download-only-763163" cluster
	I0217 12:33:13.854204  860387 cache.go:121] Beginning downloading kic base image for docker with crio
	I0217 12:33:13.856914  860387 out.go:97] Pulling base image v0.0.46-1739182054-20387 ...
	I0217 12:33:13.856941  860387 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0217 12:33:13.857047  860387 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon
	I0217 12:33:13.873699  860387 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad to local cache
	I0217 12:33:13.874632  860387 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local cache directory
	I0217 12:33:13.874745  860387 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad to local cache
	I0217 12:33:13.908895  860387 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0217 12:33:13.908921  860387 cache.go:56] Caching tarball of preloaded images
	I0217 12:33:13.909739  860387 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0217 12:33:13.913023  860387 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0217 12:33:13.913048  860387 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0217 12:33:13.992461  860387 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/20427-855004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0217 12:33:17.508086  860387 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0217 12:33:17.508293  860387 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20427-855004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-763163 host does not exist
	  To start a cluster, run: "minikube start -p download-only-763163"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.42s)

TestDownloadOnly/v1.20.0/DeleteAll (0.39s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.39s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-763163
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.32.1/json-events (4.69s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-118950 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-118950 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.690342025s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (4.69s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0217 12:33:25.029807  860382 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0217 12:33:25.029845  860382 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-855004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-118950
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-118950: exit status 85 (85.379999ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-763163 | jenkins | v1.35.0 | 17 Feb 25 12:33 UTC |                     |
	|         | -p download-only-763163        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 17 Feb 25 12:33 UTC | 17 Feb 25 12:33 UTC |
	| delete  | -p download-only-763163        | download-only-763163 | jenkins | v1.35.0 | 17 Feb 25 12:33 UTC | 17 Feb 25 12:33 UTC |
	| start   | -o=json --download-only        | download-only-118950 | jenkins | v1.35.0 | 17 Feb 25 12:33 UTC |                     |
	|         | -p download-only-118950        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/17 12:33:20
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0217 12:33:20.392625  860592 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:33:20.392749  860592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:33:20.392760  860592 out.go:358] Setting ErrFile to fd 2...
	I0217 12:33:20.392766  860592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:33:20.393121  860592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
	I0217 12:33:20.393596  860592 out.go:352] Setting JSON to true
	I0217 12:33:20.394528  860592 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18949,"bootTime":1739776652,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0217 12:33:20.394631  860592 start.go:139] virtualization:  
	I0217 12:33:20.399183  860592 out.go:97] [download-only-118950] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0217 12:33:20.399444  860592 notify.go:220] Checking for updates...
	I0217 12:33:20.403536  860592 out.go:169] MINIKUBE_LOCATION=20427
	I0217 12:33:20.407191  860592 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 12:33:20.410691  860592 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20427-855004/kubeconfig
	I0217 12:33:20.414191  860592 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-855004/.minikube
	I0217 12:33:20.417562  860592 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0217 12:33:20.423979  860592 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0217 12:33:20.424284  860592 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 12:33:20.453949  860592 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0217 12:33:20.454052  860592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:33:20.509347  860592 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-17 12:33:20.499784678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:33:20.509461  860592 docker.go:318] overlay module found
	I0217 12:33:20.512661  860592 out.go:97] Using the docker driver based on user configuration
	I0217 12:33:20.512699  860592 start.go:297] selected driver: docker
	I0217 12:33:20.512706  860592 start.go:901] validating driver "docker" against <nil>
	I0217 12:33:20.512813  860592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:33:20.564181  860592 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-17 12:33:20.555692841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:33:20.564397  860592 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0217 12:33:20.564685  860592 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0217 12:33:20.564847  860592 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0217 12:33:20.567985  860592 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-118950 host does not exist
	  To start a cluster, run: "minikube start -p download-only-118950"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.09s)

TestDownloadOnly/v1.32.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.22s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-118950
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)
=== RUN   TestBinaryMirror
I0217 12:33:26.355165  860382 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-810958 --alsologtostderr --binary-mirror http://127.0.0.1:43611 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-810958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-810958
--- PASS: TestBinaryMirror (0.60s)
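The `checksum=file:...kubectl.sha256` log line above shows minikube gating the mirrored kubectl download on its published SHA-256 digest. A standalone offline sketch of that gate (all paths and the file contents here are illustrative stand-ins, not minikube's actual cache layout):

```shell
# Stand-in for the downloaded binary and its published digest file.
printf 'fake-kubectl-binary' > /tmp/kubectl
sha256sum /tmp/kubectl | awk '{print $1}' > /tmp/kubectl.sha256

# sha256sum -c expects "<digest>  <path>" on stdin; exit status 0 and a
# "<path>: OK" line mean the binary matches the published checksum.
printf '%s  /tmp/kubectl\n' "$(cat /tmp/kubectl.sha256)" | sha256sum -c -
```

A corrupted download would change the digest, `sha256sum -c` would report `FAILED` with a non-zero exit status, and the cached binary would be rejected.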

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-925274
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-925274: exit status 85 (82.239565ms)

-- stdout --
	* Profile "addons-925274" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-925274"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-925274
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-925274: exit status 85 (79.372117ms)

-- stdout --
	* Profile "addons-925274" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-925274"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (178.76s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-925274 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-925274 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m58.757642627s)
--- PASS: TestAddons/Setup (178.76s)

TestAddons/serial/GCPAuth/Namespaces (0.22s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-925274 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-925274 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/serial/GCPAuth/FakeCredentials (11.95s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-925274 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-925274 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e829d43d-6af7-4a92-91b5-fc6e84543751] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e829d43d-6af7-4a92-91b5-fc6e84543751] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003351667s
addons_test.go:633: (dbg) Run:  kubectl --context addons-925274 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-925274 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-925274 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-925274 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.95s)
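The three `kubectl exec` checks above (printenv `GOOGLE_APPLICATION_CREDENTIALS`, `cat /google-app-creds.json`, printenv `GOOGLE_CLOUD_PROJECT`) pin down what the gcp-auth webhook mutates into the busybox pod: two injected environment variables and a mounted fake credentials file. Roughly, as an illustrative sketch only (field values and volume wiring are assumptions, not the addon's exact manifest), the mutated container spec looks like:

```yaml
# Approximate shape of the gcp-auth mutation (illustrative, not verbatim):
spec:
  containers:
  - name: busybox
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS   # verified by printenv above
      value: /google-app-creds.json
    - name: GOOGLE_CLOUD_PROJECT             # verified by printenv above
      value: some-fake-project               # placeholder value (assumption)
    volumeMounts:
    - name: gcp-creds                        # mount name is an assumption
      mountPath: /google-app-creds.json      # verified by the cat check above
      readOnly: true
```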

TestAddons/parallel/Registry (16.48s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 16.522471ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-gzrk4" [bc98d5d6-113a-4202-814b-28cac3908f75] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004100082s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-f8jhn" [db9d1473-4392-40d5-9a55-91cf8512c525] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003746253s
addons_test.go:331: (dbg) Run:  kubectl --context addons-925274 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-925274 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-925274 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.513064629s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 ip
2025/02/17 12:37:02 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.48s)

TestAddons/parallel/InspektorGadget (11.85s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5lwkf" [93bb6a65-89e4-479b-94ad-68f3312a6e36] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004261243s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-925274 addons disable inspektor-gadget --alsologtostderr -v=1: (5.839063314s)
--- PASS: TestAddons/parallel/InspektorGadget (11.85s)

TestAddons/parallel/MetricsServer (5.83s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 6.20589ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-m6442" [1d84bb16-0b65-4275-8143-4a1f4c61003f] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004381238s
addons_test.go:402: (dbg) Run:  kubectl --context addons-925274 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)

TestAddons/parallel/CSI (57.28s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0217 12:37:20.864088  860382 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0217 12:37:20.871540  860382 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0217 12:37:20.871571  860382 kapi.go:107] duration metric: took 10.841868ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 10.853355ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-925274 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-925274 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [655d85b6-e877-43e9-8055-85e315b95b54] Pending
helpers_test.go:344: "task-pv-pod" [655d85b6-e877-43e9-8055-85e315b95b54] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [655d85b6-e877-43e9-8055-85e315b95b54] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003229248s
addons_test.go:511: (dbg) Run:  kubectl --context addons-925274 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-925274 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-925274 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-925274 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-925274 delete pod task-pv-pod: (1.041972033s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-925274 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-925274 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-925274 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [763e832b-be8f-482c-a314-0135b165ded3] Pending
helpers_test.go:344: "task-pv-pod-restore" [763e832b-be8f-482c-a314-0135b165ded3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [763e832b-be8f-482c-a314-0135b165ded3] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003374625s
addons_test.go:553: (dbg) Run:  kubectl --context addons-925274 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-925274 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-925274 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-925274 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.841084789s)
--- PASS: TestAddons/parallel/CSI (57.28s)
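The runs of identical `kubectl get pvc ... -o jsonpath={.status.phase}` lines above are `helpers_test.go` polling the claim until it reports `Bound`. A standalone sketch of that loop, with the probe command and timeout parameterized (the kubectl invocation in the comment is taken from this test; the function name and stub demonstration are illustrative):

```shell
#!/bin/sh
# wait_for_phase CMD WANT TIMEOUT: re-run CMD every 2s until its stdout equals
# WANT; return 1 once TIMEOUT seconds have elapsed without a match.
wait_for_phase() {
  cmd=$1; want=$2; timeout=${3:-60}; elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    got=$(eval "$cmd")
    [ "$got" = "$want" ] && return 0
    sleep 2
    elapsed=$((elapsed + 2))
  done
  return 1
}

# Against the cluster, the test effectively polls:
#   wait_for_phase "kubectl --context addons-925274 get pvc hpvc -o jsonpath={.status.phase}" Bound 360
# Self-contained demonstration with a stub probe:
wait_for_phase "echo Bound" Bound 10 && echo "pvc bound"
```

Each poll line in the log above corresponds to one iteration of such a loop; the test fails only when the timeout (6m0s here) is exhausted.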

TestAddons/parallel/Headlamp (17.88s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-925274 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-925274 --alsologtostderr -v=1: (1.026499771s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-vptdk" [10bf1b52-eba0-4d4a-92db-19c3f2ef07a8] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-vptdk" [10bf1b52-eba0-4d4a-92db-19c3f2ef07a8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-vptdk" [10bf1b52-eba0-4d4a-92db-19c3f2ef07a8] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003214913s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-925274 addons disable headlamp --alsologtostderr -v=1: (5.853809921s)
--- PASS: TestAddons/parallel/Headlamp (17.88s)

TestAddons/parallel/CloudSpanner (5.6s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-6rtrr" [f5f4fd13-5261-48ba-aa5a-b27083bb18d5] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004015518s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

TestAddons/parallel/LocalPath (9.46s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-925274 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-925274 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-925274 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [51818ae8-4d08-4690-98da-88720730b38c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [51818ae8-4d08-4690-98da-88720730b38c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [51818ae8-4d08-4690-98da-88720730b38c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003461063s
addons_test.go:906: (dbg) Run:  kubectl --context addons-925274 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 ssh "cat /opt/local-path-provisioner/pvc-b18182c0-ddd9-4a0d-a7b0-1917dbefd7b4_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-925274 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-925274 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.46s)

TestAddons/parallel/NvidiaDevicePlugin (5.68s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-s4mtb" [b1b07897-040f-443d-8dcc-0aea2b69a387] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.01053062s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.68s)

TestAddons/parallel/Yakd (12.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-78f74" [04df4b5d-9472-47af-9d28-e9332668c93f] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003655042s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-925274 addons disable yakd --alsologtostderr -v=1: (6.003629609s)
--- PASS: TestAddons/parallel/Yakd (12.01s)

TestAddons/StoppedEnableDisable (12.17s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-925274
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-925274: (11.881744847s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-925274
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-925274
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-925274
--- PASS: TestAddons/StoppedEnableDisable (12.17s)

TestCertOptions (38.82s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-807275 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-807275 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.014171937s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-807275 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-807275 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-807275 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-807275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-807275
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-807275: (2.121134727s)
--- PASS: TestCertOptions (38.82s)
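The `openssl x509 -text -noout` step above is how the test confirms that the extra `--apiserver-ips`/`--apiserver-names` values landed in the apiserver certificate's Subject Alternative Names. The check can be reproduced offline with a throwaway self-signed certificate carrying the same SAN list (paths are illustrative; requires OpenSSL 1.1.1+ for `-addext`):

```shell
# Generate a throwaway cert with the SANs this test requests via
# --apiserver-ips=127.0.0.1,192.168.15.15 and
# --apiserver-names=localhost,www.google.com.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver.key -out /tmp/apiserver.crt \
  -subj "/CN=minikube" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.15.15,DNS:localhost,DNS:www.google.com"

# Inspect it the way the test does; the SAN block should list every
# requested IP and DNS name.
openssl x509 -text -noout -in /tmp/apiserver.crt | grep -A1 "Subject Alternative Name"
```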

TestCertExpiration (245.81s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-661477 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-661477 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (37.889586607s)
E0217 13:18:39.059628  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-661477 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0217 13:21:26.653531  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-661477 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (25.318352665s)
helpers_test.go:175: Cleaning up "cert-expiration-661477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-661477
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-661477: (2.602035161s)
--- PASS: TestCertExpiration (245.81s)

TestForceSystemdFlag (43.58s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-985279 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0217 13:16:26.653111  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-985279 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.752323467s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-985279 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-985279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-985279
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-985279: (2.499917496s)
--- PASS: TestForceSystemdFlag (43.58s)

TestForceSystemdEnv (44.41s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-210699 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-210699 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.579238272s)
helpers_test.go:175: Cleaning up "force-systemd-env-210699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-210699
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-210699: (2.832113601s)
--- PASS: TestForceSystemdEnv (44.41s)

TestErrorSpam/setup (32.06s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-027202 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-027202 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-027202 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-027202 --driver=docker  --container-runtime=crio: (32.061386294s)
--- PASS: TestErrorSpam/setup (32.06s)

TestErrorSpam/start (0.8s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.1s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.78s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 pause
--- PASS: TestErrorSpam/pause (1.78s)

TestErrorSpam/unpause (1.78s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

TestErrorSpam/stop (1.5s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 stop: (1.282483423s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-027202 --log_dir /tmp/nospam-027202 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20427-855004/.minikube/files/etc/test/nested/copy/860382/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.59s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 start -p functional-935264 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0217 12:41:26.657864  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:26.664385  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:26.675753  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:26.697142  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:26.738539  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:26.819912  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:26.981273  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:27.303177  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:27.945210  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:29.226560  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:31.788333  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:36.910046  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:47.152038  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-arm64 start -p functional-935264 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (49.58478301s)
--- PASS: TestFunctional/serial/StartWithProxy (49.59s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.39s)
=== RUN   TestFunctional/serial/SoftStart
I0217 12:42:03.971686  860382 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-arm64 start -p functional-935264 --alsologtostderr -v=8
E0217 12:42:07.633777  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-arm64 start -p functional-935264 --alsologtostderr -v=8: (15.3795915s)
functional_test.go:680: soft start took 15.384977095s for "functional-935264" cluster.
I0217 12:42:19.351611  860382 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (15.39s)

TestFunctional/serial/KubeContext (0.08s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-935264 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.58s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-935264 cache add registry.k8s.io/pause:3.1: (1.530329955s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-935264 cache add registry.k8s.io/pause:3.3: (1.644505493s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-935264 cache add registry.k8s.io/pause:latest: (1.405155071s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.58s)

TestFunctional/serial/CacheCmd/cache/add_local (1.41s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-935264 /tmp/TestFunctionalserialCacheCmdcacheadd_local784277234/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 cache add minikube-local-cache-test:functional-935264
functional_test.go:1111: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 cache delete minikube-local-cache-test:functional-935264
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-935264
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-935264 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (282.206055ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-arm64 -p functional-935264 cache reload: (1.226044342s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 kubectl -- --context functional-935264 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-935264 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (60.11s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-arm64 start -p functional-935264 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0217 12:42:48.595183  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-arm64 start -p functional-935264 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m0.105799643s)
functional_test.go:778: restart took 1m0.105914676s for "functional-935264" cluster.
I0217 12:43:28.619888  860382 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (60.11s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-935264 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.78s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-arm64 -p functional-935264 logs: (1.777434444s)
--- PASS: TestFunctional/serial/LogsCmd (1.78s)

TestFunctional/serial/LogsFileCmd (1.78s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 logs --file /tmp/TestFunctionalserialLogsFileCmd1151760536/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-arm64 -p functional-935264 logs --file /tmp/TestFunctionalserialLogsFileCmd1151760536/001/logs.txt: (1.775972958s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.78s)

TestFunctional/serial/InvalidService (4.96s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-935264 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-935264
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-935264: exit status 115 (603.1112ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31736 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-935264 delete -f testdata/invalidsvc.yaml
functional_test.go:2344: (dbg) Done: kubectl --context functional-935264 delete -f testdata/invalidsvc.yaml: (1.08543132s)
--- PASS: TestFunctional/serial/InvalidService (4.96s)

TestFunctional/parallel/ConfigCmd (0.5s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-935264 config get cpus: exit status 14 (80.927494ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-935264 config get cpus: exit status 14 (71.979275ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (12.14s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-935264 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-935264 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 887707: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.14s)

TestFunctional/parallel/DryRun (0.48s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-935264 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-935264 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (194.519618ms)

-- stdout --
	* [functional-935264] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-855004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-855004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0217 12:44:10.802161  887406 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:44:10.802282  887406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:44:10.802293  887406 out.go:358] Setting ErrFile to fd 2...
	I0217 12:44:10.802298  887406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:44:10.802551  887406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
	I0217 12:44:10.802917  887406 out.go:352] Setting JSON to false
	I0217 12:44:10.803961  887406 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19599,"bootTime":1739776652,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0217 12:44:10.804037  887406 start.go:139] virtualization:  
	I0217 12:44:10.807240  887406 out.go:177] * [functional-935264] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0217 12:44:10.810876  887406 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 12:44:10.811036  887406 notify.go:220] Checking for updates...
	I0217 12:44:10.816403  887406 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 12:44:10.819235  887406 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-855004/kubeconfig
	I0217 12:44:10.822025  887406 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-855004/.minikube
	I0217 12:44:10.824865  887406 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0217 12:44:10.828254  887406 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 12:44:10.831611  887406 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0217 12:44:10.832266  887406 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 12:44:10.863341  887406 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0217 12:44:10.863477  887406 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:44:10.923558  887406 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-17 12:44:10.913881474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:44:10.923674  887406 docker.go:318] overlay module found
	I0217 12:44:10.928707  887406 out.go:177] * Using the docker driver based on existing profile
	I0217 12:44:10.931667  887406 start.go:297] selected driver: docker
	I0217 12:44:10.931690  887406 start.go:901] validating driver "docker" against &{Name:functional-935264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-935264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 12:44:10.931801  887406 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 12:44:10.935550  887406 out.go:201] 
	W0217 12:44:10.938474  887406 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0217 12:44:10.941385  887406 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-arm64 start -p functional-935264 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.48s)
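The non-zero exit above is the expected outcome: the dry run fails with exit status 23 because the requested 250MB is below minikube's usable minimum of 1800MB. A minimal shell sketch of that validation (the numbers come from the log; the check itself is an illustration, not minikube's actual code):

```shell
# Illustrative only: reproduces the RSRC_INSUFFICIENT_REQ_MEMORY check
# reported in the log above, not minikube's implementation.
req_mib=250    # from --memory 250MB
min_mib=1800   # usable minimum reported by minikube
rc=0
if [ "$req_mib" -lt "$min_mib" ]; then
  echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${req_mib}MiB is less than the usable minimum of ${min_mib}MB"
  rc=23        # matches the observed 'exit status 23'
fi
```

The test passes because it treats this non-zero exit as the expected result of the undersized dry run.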

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 start -p functional-935264 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-935264 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (204.688245ms)

-- stdout --
	* [functional-935264] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-855004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-855004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0217 12:44:10.609741  887360 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:44:10.609937  887360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:44:10.609969  887360 out.go:358] Setting ErrFile to fd 2...
	I0217 12:44:10.609991  887360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:44:10.610962  887360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
	I0217 12:44:10.611439  887360 out.go:352] Setting JSON to false
	I0217 12:44:10.612442  887360 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19599,"bootTime":1739776652,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0217 12:44:10.612565  887360 start.go:139] virtualization:  
	I0217 12:44:10.616647  887360 out.go:177] * [functional-935264] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0217 12:44:10.619945  887360 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 12:44:10.619961  887360 notify.go:220] Checking for updates...
	I0217 12:44:10.625955  887360 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 12:44:10.628880  887360 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-855004/kubeconfig
	I0217 12:44:10.631977  887360 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-855004/.minikube
	I0217 12:44:10.634855  887360 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0217 12:44:10.637925  887360 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 12:44:10.641423  887360 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0217 12:44:10.642031  887360 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 12:44:10.677167  887360 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0217 12:44:10.677340  887360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:44:10.731986  887360 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-17 12:44:10.721253016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:44:10.732126  887360 docker.go:318] overlay module found
	I0217 12:44:10.735332  887360 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0217 12:44:10.738149  887360 start.go:297] selected driver: docker
	I0217 12:44:10.738170  887360 start.go:901] validating driver "docker" against &{Name:functional-935264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-935264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 12:44:10.738282  887360 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 12:44:10.741856  887360 out.go:201] 
	W0217 12:44:10.744633  887360 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0217 12:44:10.747504  887360 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 status -o json
E0217 12:44:10.516583  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/ServiceCmdConnect (11.75s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1644: (dbg) Run:  kubectl --context functional-935264 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-935264 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-dwh5n" [f21f22af-e60e-46a8-9684-fb27ce53ef73] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-dwh5n" [f21f22af-e60e-46a8-9684-fb27ce53ef73] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.011398496s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:30574
functional_test.go:1692: http://192.168.49.2:30574: success! body:

Hostname: hello-node-connect-8449669db6-dwh5n

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30574
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.75s)

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (26.06s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9d403fb4-d6e8-4f3f-8d1f-26daa9e3eb51] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.041613221s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-935264 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-935264 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-935264 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-935264 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9c4cf771-117d-4ea4-99bb-6679155b589a] Pending
helpers_test.go:344: "sp-pod" [9c4cf771-117d-4ea4-99bb-6679155b589a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9c4cf771-117d-4ea4-99bb-6679155b589a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003395846s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-935264 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-935264 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-935264 delete -f testdata/storage-provisioner/pod.yaml: (1.014726672s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-935264 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f4bd1839-4608-4fb6-b381-8b1299d0e96a] Pending
helpers_test.go:344: "sp-pod" [f4bd1839-4608-4fb6-b381-8b1299d0e96a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002931305s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-935264 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.06s)
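For reference, the pvc.yaml applied above creates the claim named myclaim that the `get pvc myclaim` step then inspects. A hypothetical minimal equivalent — the access mode and 500Mi request are illustrative assumptions, not the actual contents of testdata/storage-provisioner/pvc.yaml:

```yaml
# Hypothetical minimal PVC manifest; only the name "myclaim" is taken
# from the log. Access mode and size are placeholder assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
```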

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (2.36s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh -n functional-935264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 cp functional-935264:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd802603933/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh -n functional-935264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh -n functional-935264 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.36s)

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/860382/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "sudo cat /etc/test/nested/copy/860382/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.21s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/860382.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "sudo cat /etc/ssl/certs/860382.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/860382.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "sudo cat /usr/share/ca-certificates/860382.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/8603822.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "sudo cat /etc/ssl/certs/8603822.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/8603822.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "sudo cat /usr/share/ca-certificates/8603822.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.21s)
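The /etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0 entries checked above are OpenSSL subject-hash filenames. A sketch of where that naming comes from, using a throwaway self-signed certificate (requires the openssl CLI; the CN and key size here are arbitrary):

```shell
# Illustrative only: reproduce the <subject-hash>.0 trust-store naming
# seen in the CertSync checks, with a locally generated cert.
set -eu
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$workdir/key.pem" -out "$workdir/cert.pem" \
  -days 1 -subj "/CN=certsync-demo" 2>/dev/null
# `openssl x509 -hash` prints the subject-name hash used as the filename.
hash=$(openssl x509 -noout -hash -in "$workdir/cert.pem")
# Trust-store entry name: <subject hash>.<collision counter>
cp "$workdir/cert.pem" "$workdir/$hash.0"
echo "$hash.0"
```

This is why the test can look up the synced certs by hashed filename as well as by their original .pem names.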

TestFunctional/parallel/NodeLabels (0.14s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-935264 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-935264 ssh "sudo systemctl is-active docker": exit status 1 (392.919497ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-935264 ssh "sudo systemctl is-active containerd": exit status 1 (319.917603ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-935264 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-935264 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-935264 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-935264 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 885168: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-935264 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-935264 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1a2f1212-3495-4801-9ee5-fb0437f23eb9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1a2f1212-3495-4801-9ee5-fb0437f23eb9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004089291s
I0217 12:43:47.549042  860382 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-935264 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.81.224 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-935264 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1454: (dbg) Run:  kubectl --context functional-935264 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-935264 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-bjs9t" [10166041-991c-464e-895d-4cf75171cffd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-bjs9t" [10166041-991c-464e-895d-4cf75171cffd] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003516379s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1332: Took "367.019341ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1346: Took "60.049354ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1383: Took "352.288368ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1396: Took "55.451481ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (8.7s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-935264 /tmp/TestFunctionalparallelMountCmdany-port4193165841/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1739796246961889863" to /tmp/TestFunctionalparallelMountCmdany-port4193165841/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1739796246961889863" to /tmp/TestFunctionalparallelMountCmdany-port4193165841/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1739796246961889863" to /tmp/TestFunctionalparallelMountCmdany-port4193165841/001/test-1739796246961889863
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 17 12:44 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 17 12:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 17 12:44 test-1739796246961889863
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh cat /mount-9p/test-1739796246961889863
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-935264 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [aa6905de-1b04-46b3-b0ae-bb513c89cf6d] Pending
helpers_test.go:344: "busybox-mount" [aa6905de-1b04-46b3-b0ae-bb513c89cf6d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [aa6905de-1b04-46b3-b0ae-bb513c89cf6d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [aa6905de-1b04-46b3-b0ae-bb513c89cf6d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00356061s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-935264 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-935264 /tmp/TestFunctionalparallelMountCmdany-port4193165841/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.70s)

TestFunctional/parallel/ServiceCmd/List (0.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 service list -o json
functional_test.go:1511: Took "596.000368ms" to run "out/minikube-linux-arm64 -p functional-935264 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:30460
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:30460
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/MountCmd/specific-port (2.17s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-935264 /tmp/TestFunctionalparallelMountCmdspecific-port2181392363/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-935264 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (373.698768ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0217 12:44:16.041092  860382 retry.go:31] will retry after 456.545629ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-935264 /tmp/TestFunctionalparallelMountCmdspecific-port2181392363/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-935264 ssh "sudo umount -f /mount-9p": exit status 1 (389.181139ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-935264 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-935264 /tmp/TestFunctionalparallelMountCmdspecific-port2181392363/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.59s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-935264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1876326477/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-935264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1876326477/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-935264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1876326477/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-935264 ssh "findmnt -T" /mount1: exit status 1 (995.228594ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0217 12:44:18.836167  860382 retry.go:31] will retry after 313.017526ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-935264 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-935264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1876326477/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-935264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1876326477/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-935264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1876326477/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.59s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.35s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-linux-arm64 -p functional-935264 version -o=json --components: (1.347520748s)
--- PASS: TestFunctional/parallel/Version/components (1.35s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-935264 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-935264
localhost/kicbase/echo-server:functional-935264
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250214-acbabc1a
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-935264 image ls --format short --alsologtostderr:
I0217 12:44:29.176794  890285 out.go:345] Setting OutFile to fd 1 ...
I0217 12:44:29.177015  890285 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:44:29.177028  890285 out.go:358] Setting ErrFile to fd 2...
I0217 12:44:29.177035  890285 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:44:29.177345  890285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
I0217 12:44:29.178056  890285 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0217 12:44:29.178227  890285 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0217 12:44:29.178746  890285 cli_runner.go:164] Run: docker container inspect functional-935264 --format={{.State.Status}}
I0217 12:44:29.208919  890285 ssh_runner.go:195] Run: systemctl --version
I0217 12:44:29.208976  890285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-935264
I0217 12:44:29.238724  890285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33883 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/functional-935264/id_rsa Username:docker}
I0217 12:44:29.333127  890285 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-935264 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | 7fc9d4aa817aa | 143MB  |
| registry.k8s.io/kube-scheduler          | v1.32.1            | ddb38cac617cb | 69MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e124fbed851d7 | 98.3MB |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 2be0bcf609c65 | 98.3MB |
| docker.io/library/nginx                 | alpine             | cedb667e1a7b4 | 50.8MB |
| docker.io/library/nginx                 | latest             | 9b1b7be1ffa60 | 201MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/minikube-local-cache-test     | functional-935264  | f9bfef706f94c | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 2933761aa7ada | 88.2MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/kindest/kindnetd              | v20250214-acbabc1a | ee75e27fff91c | 99MB   |
| localhost/kicbase/echo-server           | functional-935264  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 265c2dedf28ab | 95MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-935264 image ls --format table --alsologtostderr:
I0217 12:44:29.482427  890356 out.go:345] Setting OutFile to fd 1 ...
I0217 12:44:29.482751  890356 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:44:29.482760  890356 out.go:358] Setting ErrFile to fd 2...
I0217 12:44:29.482765  890356 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:44:29.483042  890356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
I0217 12:44:29.483732  890356 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0217 12:44:29.483886  890356 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0217 12:44:29.484399  890356 cli_runner.go:164] Run: docker container inspect functional-935264 --format={{.State.Status}}
I0217 12:44:29.506454  890356 ssh_runner.go:195] Run: systemctl --version
I0217 12:44:29.506516  890356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-935264
I0217 12:44:29.533434  890356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33883 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/functional-935264/id_rsa Username:docker}
I0217 12:44:29.626824  890356 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-935264 image ls --format json --alsologtostderr:
[{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:71478b03f55b6a17c25fee181fbaaafb7ac4f5314c4007eb0cf3d35fb20938e3","registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"88241478"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"98274354"},{"id":"ee75e27fff91c8d59835f9a3efdf968ff404e580bad69746a65bcf3e304ab26f","repoDigests":["docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955","docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495"],"repoTags":["docker.io/kindest/kindnetd:v20250214-acbabc1a"],"size":"99018290"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19","repoDigests":["registry.k8s.io/kube-apiserver@sha256:88154e5cc4415415c0cbfb49ad1d63ea2de74614b7b567d5f344c5bcb5c5f244","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"94991840"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-935264"],"size":"4788229"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3","repoDigests":["docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591","docker.io/library/nginx@sha256:56568860b56c0bc8099fe1b2d84f43a18939e217e6c619126214c0f71bc27626"],"repoTags":["docker.io/library/nginx:alpine"],"size":"50780648"},{"id":"9b1b7be1ffa607d40d545607d3fdf441f08553468adec5588fb58499ad77fe58","repoDigests":["docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34","docker.io/library/nginx@sha256:a90c36c019f67444c32f4d361084d4b5b853dbf8701055df7908bf42cc613cdd"],"repoTags":["docker.io/library/nginx:latest"],"size":"201397159"},{"id":"f9bfef706f94cfb64d9ee5d8379b4addc48bf913bf5130f70df319ae40983c58","repoDigests":["localhost/minikube-local-cache-test@sha256:8ee0f7e2b291b02534207abc4c0c54fcc3f7b8682bab0db20159ac03378cddfe"],"repoTags":["localhost/minikube-local-cache-test:functional-935264"],"size":"3330"},{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"143226622"},{"id":"e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:0d36a8e2f0f6a06753c1ae9949a9a4a58d752f8364fd2ab083fcd836c37f844d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"98313623"},{"id":"ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c","repoDigests":["registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1","registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"68973892"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-935264 image ls --format json --alsologtostderr:
I0217 12:44:29.476345  890351 out.go:345] Setting OutFile to fd 1 ...
I0217 12:44:29.476537  890351 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:44:29.476567  890351 out.go:358] Setting ErrFile to fd 2...
I0217 12:44:29.476589  890351 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:44:29.476845  890351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
I0217 12:44:29.477561  890351 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0217 12:44:29.477734  890351 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0217 12:44:29.478300  890351 cli_runner.go:164] Run: docker container inspect functional-935264 --format={{.State.Status}}
I0217 12:44:29.497229  890351 ssh_runner.go:195] Run: systemctl --version
I0217 12:44:29.497289  890351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-935264
I0217 12:44:29.522292  890351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33883 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/functional-935264/id_rsa Username:docker}
I0217 12:44:29.616200  890351 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
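The JSON emitted by `image ls --format json` above is a flat array of image records. A minimal sketch of consuming that format with the Python standard library; the `sample` literal is an abridged excerpt of the output logged above, and the field semantics (string `size` in bytes, possibly-empty `repoTags`) are taken from that output:

```python
import json

# Abridged excerpt of the `minikube image ls --format json` output above.
sample = """[
 {"id": "afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8",
  "repoDigests": ["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7"],
  "repoTags": ["registry.k8s.io/pause:3.10"],
  "size": "519877"},
 {"id": "20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8",
  "repoDigests": ["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],
  "repoTags": [],
  "size": "247562353"}
]"""

images = json.loads(sample)
# "size" is a decimal string of bytes; "repoTags" is empty for untagged images.
tagged = {tag: int(img["size"]) for img in images for tag in img["repoTags"]}
untagged = [img["id"][:12] for img in images if not img["repoTags"]]
print(tagged)    # {'registry.k8s.io/pause:3.10': 519877}
print(untagged)  # ['20b332c9a70d']
```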

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-935264 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 9b1b7be1ffa607d40d545607d3fdf441f08553468adec5588fb58499ad77fe58
repoDigests:
- docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34
- docker.io/library/nginx@sha256:a90c36c019f67444c32f4d361084d4b5b853dbf8701055df7908bf42cc613cdd
repoTags:
- docker.io/library/nginx:latest
size: "201397159"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-935264
size: "4788229"
- id: ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "68973892"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "98274354"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:88154e5cc4415415c0cbfb49ad1d63ea2de74614b7b567d5f344c5bcb5c5f244
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "94991840"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: f9bfef706f94cfb64d9ee5d8379b4addc48bf913bf5130f70df319ae40983c58
repoDigests:
- localhost/minikube-local-cache-test@sha256:8ee0f7e2b291b02534207abc4c0c54fcc3f7b8682bab0db20159ac03378cddfe
repoTags:
- localhost/minikube-local-cache-test:functional-935264
size: "3330"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "143226622"
- id: ee75e27fff91c8d59835f9a3efdf968ff404e580bad69746a65bcf3e304ab26f
repoDigests:
- docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955
- docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495
repoTags:
- docker.io/kindest/kindnetd:v20250214-acbabc1a
size: "99018290"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3
repoDigests:
- docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591
- docker.io/library/nginx@sha256:56568860b56c0bc8099fe1b2d84f43a18939e217e6c619126214c0f71bc27626
repoTags:
- docker.io/library/nginx:alpine
size: "50780648"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:71478b03f55b6a17c25fee181fbaaafb7ac4f5314c4007eb0cf3d35fb20938e3
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "88241478"
- id: e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:0d36a8e2f0f6a06753c1ae9949a9a4a58d752f8364fd2ab083fcd836c37f844d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "98313623"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-935264 image ls --format yaml --alsologtostderr:
I0217 12:44:29.181019  890286 out.go:345] Setting OutFile to fd 1 ...
I0217 12:44:29.181205  890286 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:44:29.181234  890286 out.go:358] Setting ErrFile to fd 2...
I0217 12:44:29.181255  890286 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:44:29.181526  890286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
I0217 12:44:29.182226  890286 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0217 12:44:29.182390  890286 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0217 12:44:29.182964  890286 cli_runner.go:164] Run: docker container inspect functional-935264 --format={{.State.Status}}
I0217 12:44:29.214817  890286 ssh_runner.go:195] Run: systemctl --version
I0217 12:44:29.214874  890286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-935264
I0217 12:44:29.237570  890286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33883 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/functional-935264/id_rsa Username:docker}
I0217 12:44:29.329208  890286 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-935264 ssh pgrep buildkitd: exit status 1 (274.883613ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image build -t localhost/my-image:functional-935264 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-arm64 -p functional-935264 image build -t localhost/my-image:functional-935264 testdata/build --alsologtostderr: (3.164029693s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-arm64 -p functional-935264 image build -t localhost/my-image:functional-935264 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> cc1f0cba276
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-935264
--> 81c7cd1752a
Successfully tagged localhost/my-image:functional-935264
81c7cd1752aabc050dd8540a4aa5b56e25a3f47f2e2f91c337961fa56f404298
functional_test.go:340: (dbg) Stderr: out/minikube-linux-arm64 -p functional-935264 image build -t localhost/my-image:functional-935264 testdata/build --alsologtostderr:
I0217 12:44:30.004393  890475 out.go:345] Setting OutFile to fd 1 ...
I0217 12:44:30.005302  890475 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:44:30.005379  890475 out.go:358] Setting ErrFile to fd 2...
I0217 12:44:30.005402  890475 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:44:30.005740  890475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
I0217 12:44:30.006680  890475 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0217 12:44:30.008210  890475 config.go:182] Loaded profile config "functional-935264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0217 12:44:30.008833  890475 cli_runner.go:164] Run: docker container inspect functional-935264 --format={{.State.Status}}
I0217 12:44:30.046858  890475 ssh_runner.go:195] Run: systemctl --version
I0217 12:44:30.046947  890475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-935264
I0217 12:44:30.068786  890475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33883 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/functional-935264/id_rsa Username:docker}
I0217 12:44:30.160786  890475 build_images.go:161] Building image from path: /tmp/build.2542605408.tar
I0217 12:44:30.160874  890475 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0217 12:44:30.171024  890475 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2542605408.tar
I0217 12:44:30.174903  890475 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2542605408.tar: stat -c "%s %y" /var/lib/minikube/build/build.2542605408.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2542605408.tar': No such file or directory
I0217 12:44:30.174942  890475 ssh_runner.go:362] scp /tmp/build.2542605408.tar --> /var/lib/minikube/build/build.2542605408.tar (3072 bytes)
I0217 12:44:30.207956  890475 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2542605408
I0217 12:44:30.217821  890475 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2542605408 -xf /var/lib/minikube/build/build.2542605408.tar
I0217 12:44:30.227965  890475 crio.go:315] Building image: /var/lib/minikube/build/build.2542605408
I0217 12:44:30.228086  890475 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-935264 /var/lib/minikube/build/build.2542605408 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0217 12:44:33.089066  890475 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-935264 /var/lib/minikube/build/build.2542605408 --cgroup-manager=cgroupfs: (2.860938815s)
I0217 12:44:33.089144  890475 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2542605408
I0217 12:44:33.098454  890475 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2542605408.tar
I0217 12:44:33.107872  890475 build_images.go:217] Built localhost/my-image:functional-935264 from /tmp/build.2542605408.tar
I0217 12:44:33.107954  890475 build_images.go:133] succeeded building to: functional-935264
I0217 12:44:33.107985  890475 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.68s)
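The three STEP lines in the build output above imply that the Containerfile in `testdata/build` is equivalent to the following reconstruction (the `content.txt` referenced in STEP 3/3 is assumed to ship alongside it in the build context):

```dockerfile
# Reconstructed from the logged STEP 1/3 .. 3/3 of the podman build.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```

As the log shows, `minikube image build` stages the context as a tar under `/var/lib/minikube/build/`, then drives `sudo podman build` inside the node because the runtime is CRI-O rather than Docker.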

TestFunctional/parallel/ImageCommands/Setup (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-935264
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.75s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image load --daemon kicbase/echo-server:functional-935264 --alsologtostderr
2025/02/17 12:44:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:372: (dbg) Done: out/minikube-linux-arm64 -p functional-935264 image load --daemon kicbase/echo-server:functional-935264 --alsologtostderr: (1.134798641s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image load --daemon kicbase/echo-server:functional-935264 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-935264
functional_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image load --daemon kicbase/echo-server:functional-935264 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image save kicbase/echo-server:functional-935264 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)
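`image save` typically writes a docker-archive-style tarball whose top-level `manifest.json` records the repo tags, which is what the later ImageLoadFromFile test reads back. A minimal sketch of inspecting such an archive with the standard library; the archive here is a hypothetical in-memory stand-in (so no cluster is needed), with the tag value borrowed from the test above:

```python
import io
import json
import tarfile

# Build a hypothetical stand-in for echo-server-save.tar: a tar containing
# only the manifest.json that a docker-archive tarball carries at its root.
manifest = json.dumps(
    [{"RepoTags": ["kicbase/echo-server:functional-935264"]}]
).encode()

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo("manifest.json")
    info.size = len(manifest)
    tar.addfile(info, io.BytesIO(manifest))

# Reopen the archive and read the tags out of manifest.json.
buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    names = tar.getnames()
    meta = json.load(tar.extractfile("manifest.json"))

print(names)                 # ['manifest.json']
print(meta[0]["RepoTags"])   # ['kicbase/echo-server:functional-935264']
```

A real archive also contains the config blob and layer tarballs next to `manifest.json`; this sketch only demonstrates the manifest lookup.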

TestFunctional/parallel/ImageCommands/ImageRemove (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image rm kicbase/echo-server:functional-935264 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.70s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.92s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-935264
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-935264 image save --daemon kicbase/echo-server:functional-935264 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-935264
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-935264
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-935264
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-935264
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (177.17s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-145939 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0217 12:46:26.653408  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:46:54.358412  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-145939 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m56.345317374s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (177.17s)

TestMultiControlPlane/serial/DeployApp (8.56s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-145939 -- rollout status deployment/busybox: (5.488411084s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-5bmdq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-f7t7b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-jpvcz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-5bmdq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-f7t7b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-jpvcz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-5bmdq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-f7t7b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-jpvcz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.56s)

TestMultiControlPlane/serial/PingHostFromPods (1.66s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-5bmdq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-5bmdq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-f7t7b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-f7t7b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-jpvcz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-145939 -- exec busybox-58667487b6-jpvcz -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)

TestMultiControlPlane/serial/AddWorkerNode (37.11s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-145939 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-145939 -v=7 --alsologtostderr: (36.088876693s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-145939 status -v=7 --alsologtostderr: (1.025532622s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (37.11s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-145939 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.015757453s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

TestMultiControlPlane/serial/CopyFile (19.19s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-145939 status --output json -v=7 --alsologtostderr: (1.006556473s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp testdata/cp-test.txt ha-145939:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4137233472/001/cp-test_ha-145939.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939:/home/docker/cp-test.txt ha-145939-m02:/home/docker/cp-test_ha-145939_ha-145939-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m02 "sudo cat /home/docker/cp-test_ha-145939_ha-145939-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939:/home/docker/cp-test.txt ha-145939-m03:/home/docker/cp-test_ha-145939_ha-145939-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m03 "sudo cat /home/docker/cp-test_ha-145939_ha-145939-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939:/home/docker/cp-test.txt ha-145939-m04:/home/docker/cp-test_ha-145939_ha-145939-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m04 "sudo cat /home/docker/cp-test_ha-145939_ha-145939-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp testdata/cp-test.txt ha-145939-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4137233472/001/cp-test_ha-145939-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939-m02:/home/docker/cp-test.txt ha-145939:/home/docker/cp-test_ha-145939-m02_ha-145939.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939 "sudo cat /home/docker/cp-test_ha-145939-m02_ha-145939.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939-m02:/home/docker/cp-test.txt ha-145939-m03:/home/docker/cp-test_ha-145939-m02_ha-145939-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m03 "sudo cat /home/docker/cp-test_ha-145939-m02_ha-145939-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939-m02:/home/docker/cp-test.txt ha-145939-m04:/home/docker/cp-test_ha-145939-m02_ha-145939-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m04 "sudo cat /home/docker/cp-test_ha-145939-m02_ha-145939-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp testdata/cp-test.txt ha-145939-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4137233472/001/cp-test_ha-145939-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939-m03:/home/docker/cp-test.txt ha-145939:/home/docker/cp-test_ha-145939-m03_ha-145939.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939 "sudo cat /home/docker/cp-test_ha-145939-m03_ha-145939.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939-m03:/home/docker/cp-test.txt ha-145939-m02:/home/docker/cp-test_ha-145939-m03_ha-145939-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m02 "sudo cat /home/docker/cp-test_ha-145939-m03_ha-145939-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939-m03:/home/docker/cp-test.txt ha-145939-m04:/home/docker/cp-test_ha-145939-m03_ha-145939-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m04 "sudo cat /home/docker/cp-test_ha-145939-m03_ha-145939-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp testdata/cp-test.txt ha-145939-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4137233472/001/cp-test_ha-145939-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939-m04:/home/docker/cp-test.txt ha-145939:/home/docker/cp-test_ha-145939-m04_ha-145939.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939 "sudo cat /home/docker/cp-test_ha-145939-m04_ha-145939.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939-m04:/home/docker/cp-test.txt ha-145939-m02:/home/docker/cp-test_ha-145939-m04_ha-145939-m02.txt
E0217 12:48:39.058272  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:48:39.064655  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:48:39.076094  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:48:39.097546  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:48:39.139144  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:48:39.221137  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:48:39.382932  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m04 "sudo cat /home/docker/cp-test.txt"
E0217 12:48:39.704965  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m02 "sudo cat /home/docker/cp-test_ha-145939-m04_ha-145939-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 cp ha-145939-m04:/home/docker/cp-test.txt ha-145939-m03:/home/docker/cp-test_ha-145939-m04_ha-145939-m03.txt
E0217 12:48:40.346810  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 ssh -n ha-145939-m03 "sudo cat /home/docker/cp-test_ha-145939-m04_ha-145939-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.19s)

TestMultiControlPlane/serial/StopSecondaryNode (12.71s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 node stop m02 -v=7 --alsologtostderr
E0217 12:48:41.628462  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:48:44.190824  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:48:49.312909  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-145939 node stop m02 -v=7 --alsologtostderr: (11.949195187s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-145939 status -v=7 --alsologtostderr: exit status 7 (755.967266ms)

-- stdout --
	ha-145939
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-145939-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-145939-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-145939-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0217 12:48:53.098845  906351 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:48:53.099058  906351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:48:53.099089  906351 out.go:358] Setting ErrFile to fd 2...
	I0217 12:48:53.099108  906351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:48:53.099429  906351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
	I0217 12:48:53.099661  906351 out.go:352] Setting JSON to false
	I0217 12:48:53.099764  906351 mustload.go:65] Loading cluster: ha-145939
	I0217 12:48:53.099905  906351 notify.go:220] Checking for updates...
	I0217 12:48:53.102323  906351 config.go:182] Loaded profile config "ha-145939": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0217 12:48:53.102392  906351 status.go:174] checking status of ha-145939 ...
	I0217 12:48:53.103136  906351 cli_runner.go:164] Run: docker container inspect ha-145939 --format={{.State.Status}}
	I0217 12:48:53.123395  906351 status.go:371] ha-145939 host status = "Running" (err=<nil>)
	I0217 12:48:53.123419  906351 host.go:66] Checking if "ha-145939" exists ...
	I0217 12:48:53.123863  906351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-145939
	I0217 12:48:53.150907  906351 host.go:66] Checking if "ha-145939" exists ...
	I0217 12:48:53.151270  906351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 12:48:53.151334  906351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-145939
	I0217 12:48:53.172467  906351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33888 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/ha-145939/id_rsa Username:docker}
	I0217 12:48:53.273295  906351 ssh_runner.go:195] Run: systemctl --version
	I0217 12:48:53.279233  906351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 12:48:53.290983  906351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:48:53.368207  906351 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:73 SystemTime:2025-02-17 12:48:53.357998327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:48:53.368835  906351 kubeconfig.go:125] found "ha-145939" server: "https://192.168.49.254:8443"
	I0217 12:48:53.368870  906351 api_server.go:166] Checking apiserver status ...
	I0217 12:48:53.368914  906351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0217 12:48:53.380148  906351 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1435/cgroup
	I0217 12:48:53.390386  906351 api_server.go:182] apiserver freezer: "11:freezer:/docker/b040eb94e336ecd90c0dd5fa017b4570c3e4c053c9cf03fc57e6da30340cd6c3/crio/crio-642f6dafd7d9ebb1f2193b28b70fc5e1409f904964d24f9bcb4f47dd8e5b0970"
	I0217 12:48:53.390459  906351 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b040eb94e336ecd90c0dd5fa017b4570c3e4c053c9cf03fc57e6da30340cd6c3/crio/crio-642f6dafd7d9ebb1f2193b28b70fc5e1409f904964d24f9bcb4f47dd8e5b0970/freezer.state
	I0217 12:48:53.399317  906351 api_server.go:204] freezer state: "THAWED"
	I0217 12:48:53.399345  906351 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0217 12:48:53.407660  906351 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0217 12:48:53.407692  906351 status.go:463] ha-145939 apiserver status = Running (err=<nil>)
	I0217 12:48:53.407702  906351 status.go:176] ha-145939 status: &{Name:ha-145939 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:48:53.407719  906351 status.go:174] checking status of ha-145939-m02 ...
	I0217 12:48:53.408073  906351 cli_runner.go:164] Run: docker container inspect ha-145939-m02 --format={{.State.Status}}
	I0217 12:48:53.424736  906351 status.go:371] ha-145939-m02 host status = "Stopped" (err=<nil>)
	I0217 12:48:53.424760  906351 status.go:384] host is not running, skipping remaining checks
	I0217 12:48:53.424768  906351 status.go:176] ha-145939-m02 status: &{Name:ha-145939-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:48:53.424788  906351 status.go:174] checking status of ha-145939-m03 ...
	I0217 12:48:53.425105  906351 cli_runner.go:164] Run: docker container inspect ha-145939-m03 --format={{.State.Status}}
	I0217 12:48:53.447091  906351 status.go:371] ha-145939-m03 host status = "Running" (err=<nil>)
	I0217 12:48:53.447119  906351 host.go:66] Checking if "ha-145939-m03" exists ...
	I0217 12:48:53.447425  906351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-145939-m03
	I0217 12:48:53.465796  906351 host.go:66] Checking if "ha-145939-m03" exists ...
	I0217 12:48:53.466110  906351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 12:48:53.466170  906351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-145939-m03
	I0217 12:48:53.483101  906351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/ha-145939-m03/id_rsa Username:docker}
	I0217 12:48:53.573125  906351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 12:48:53.585118  906351 kubeconfig.go:125] found "ha-145939" server: "https://192.168.49.254:8443"
	I0217 12:48:53.585150  906351 api_server.go:166] Checking apiserver status ...
	I0217 12:48:53.585223  906351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0217 12:48:53.595600  906351 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1331/cgroup
	I0217 12:48:53.605281  906351 api_server.go:182] apiserver freezer: "11:freezer:/docker/9d1607a1c36d01d8f843f1bc12601fe44c041a85ace4138679df44e7d2fdf8a0/crio/crio-0130073059a292c42fd640884b26560cd09c085e5e6d336e90c0a6331e11c0bb"
	I0217 12:48:53.605373  906351 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9d1607a1c36d01d8f843f1bc12601fe44c041a85ace4138679df44e7d2fdf8a0/crio/crio-0130073059a292c42fd640884b26560cd09c085e5e6d336e90c0a6331e11c0bb/freezer.state
	I0217 12:48:53.614155  906351 api_server.go:204] freezer state: "THAWED"
	I0217 12:48:53.614222  906351 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0217 12:48:53.622436  906351 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0217 12:48:53.622462  906351 status.go:463] ha-145939-m03 apiserver status = Running (err=<nil>)
	I0217 12:48:53.622471  906351 status.go:176] ha-145939-m03 status: &{Name:ha-145939-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:48:53.622487  906351 status.go:174] checking status of ha-145939-m04 ...
	I0217 12:48:53.622838  906351 cli_runner.go:164] Run: docker container inspect ha-145939-m04 --format={{.State.Status}}
	I0217 12:48:53.647579  906351 status.go:371] ha-145939-m04 host status = "Running" (err=<nil>)
	I0217 12:48:53.647623  906351 host.go:66] Checking if "ha-145939-m04" exists ...
	I0217 12:48:53.648188  906351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-145939-m04
	I0217 12:48:53.666316  906351 host.go:66] Checking if "ha-145939-m04" exists ...
	I0217 12:48:53.666621  906351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 12:48:53.666668  906351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-145939-m04
	I0217 12:48:53.684275  906351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33903 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/ha-145939-m04/id_rsa Username:docker}
	I0217 12:48:53.772993  906351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 12:48:53.788830  906351 status.go:176] ha-145939-m04 status: &{Name:ha-145939-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.71s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

TestMultiControlPlane/serial/RestartSecondaryNode (24.94s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 node start m02 -v=7 --alsologtostderr
E0217 12:48:59.555069  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-145939 node start m02 -v=7 --alsologtostderr: (23.392006296s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-145939 status -v=7 --alsologtostderr: (1.413383741s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (24.94s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0217 12:49:20.037046  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.314441828s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (176.51s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-145939 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-145939 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-145939 -v=7 --alsologtostderr: (36.773382686s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-145939 --wait=true -v=7 --alsologtostderr
E0217 12:50:00.998464  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:51:22.922451  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:51:26.652755  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-145939 --wait=true -v=7 --alsologtostderr: (2m19.542178698s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-145939
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (176.51s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.63s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-145939 node delete m03 -v=7 --alsologtostderr: (11.630443762s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.63s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

TestMultiControlPlane/serial/StopCluster (35.69s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-145939 stop -v=7 --alsologtostderr: (35.569644731s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-145939 status -v=7 --alsologtostderr: exit status 7 (122.568303ms)

-- stdout --
	ha-145939
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-145939-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-145939-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0217 12:53:06.366958  919904 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:53:06.367418  919904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:53:06.367457  919904 out.go:358] Setting ErrFile to fd 2...
	I0217 12:53:06.367481  919904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:53:06.367791  919904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
	I0217 12:53:06.368074  919904 out.go:352] Setting JSON to false
	I0217 12:53:06.368138  919904 mustload.go:65] Loading cluster: ha-145939
	I0217 12:53:06.368604  919904 config.go:182] Loaded profile config "ha-145939": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0217 12:53:06.368648  919904 status.go:174] checking status of ha-145939 ...
	I0217 12:53:06.369225  919904 cli_runner.go:164] Run: docker container inspect ha-145939 --format={{.State.Status}}
	I0217 12:53:06.369283  919904 notify.go:220] Checking for updates...
	I0217 12:53:06.388752  919904 status.go:371] ha-145939 host status = "Stopped" (err=<nil>)
	I0217 12:53:06.388776  919904 status.go:384] host is not running, skipping remaining checks
	I0217 12:53:06.388797  919904 status.go:176] ha-145939 status: &{Name:ha-145939 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:53:06.388835  919904 status.go:174] checking status of ha-145939-m02 ...
	I0217 12:53:06.389140  919904 cli_runner.go:164] Run: docker container inspect ha-145939-m02 --format={{.State.Status}}
	I0217 12:53:06.409817  919904 status.go:371] ha-145939-m02 host status = "Stopped" (err=<nil>)
	I0217 12:53:06.409843  919904 status.go:384] host is not running, skipping remaining checks
	I0217 12:53:06.409851  919904 status.go:176] ha-145939-m02 status: &{Name:ha-145939-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:53:06.409871  919904 status.go:174] checking status of ha-145939-m04 ...
	I0217 12:53:06.410197  919904 cli_runner.go:164] Run: docker container inspect ha-145939-m04 --format={{.State.Status}}
	I0217 12:53:06.432240  919904 status.go:371] ha-145939-m04 host status = "Stopped" (err=<nil>)
	I0217 12:53:06.432264  919904 status.go:384] host is not running, skipping remaining checks
	I0217 12:53:06.432271  919904 status.go:176] ha-145939-m04 status: &{Name:ha-145939-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.69s)

TestMultiControlPlane/serial/RestartCluster (97.66s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-145939 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0217 12:53:39.058822  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:54:06.763746  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-145939 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m36.715859052s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (97.66s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

TestMultiControlPlane/serial/AddSecondaryNode (78.44s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-145939 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-145939 --control-plane -v=7 --alsologtostderr: (1m17.38049176s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-145939 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-145939 status -v=7 --alsologtostderr: (1.061339464s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.44s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.00s)

TestJSONOutput/start/Command (50.42s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-790745 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0217 12:56:26.653474  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-790745 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (50.413605613s)
--- PASS: TestJSONOutput/start/Command (50.42s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-790745 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-790745 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-790745 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-790745 --output=json --user=testUser: (5.85242954s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-839091 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-839091 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.156512ms)

-- stdout --
	{"specversion":"1.0","id":"2f472dc5-74af-4bfe-9460-89cca7fd5d67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-839091] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4339eed3-2865-4643-8a29-f5c4ab19370e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20427"}}
	{"specversion":"1.0","id":"8730e767-1776-41bb-b3d4-eb2b833f7009","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ee54b7ba-dcbe-488d-9c6d-de4f230e6cbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20427-855004/kubeconfig"}}
	{"specversion":"1.0","id":"cf4c72ae-b9ed-4046-81e4-8cbf57963742","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-855004/.minikube"}}
	{"specversion":"1.0","id":"be875d76-b1ce-4423-b21b-8e24331d988c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"02ad4cd6-4e4e-4aaa-8f65-c9cad35055a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6b467054-5e1a-4226-939c-26c757f6b027","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-839091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-839091
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (36.3s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-510042 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-510042 --network=: (34.179771146s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-510042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-510042
E0217 12:57:49.719785  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-510042: (2.091628449s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.30s)

TestKicCustomNetwork/use_default_bridge_network (33.78s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-955226 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-955226 --network=bridge: (31.69394216s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-955226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-955226
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-955226: (2.059957492s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.78s)

TestKicExistingNetwork (30.3s)

=== RUN   TestKicExistingNetwork
I0217 12:58:24.749307  860382 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0217 12:58:24.768953  860382 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0217 12:58:24.769043  860382 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0217 12:58:24.769069  860382 cli_runner.go:164] Run: docker network inspect existing-network
W0217 12:58:24.785264  860382 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0217 12:58:24.785311  860382 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0217 12:58:24.785327  860382 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0217 12:58:24.785523  860382 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0217 12:58:24.803933  860382 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5f4ffb62d344 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e0:32:60:e7} reservation:<nil>}
I0217 12:58:24.804357  860382 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b95030}
I0217 12:58:24.804387  860382 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0217 12:58:24.804438  860382 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0217 12:58:24.875293  860382 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-313673 --network=existing-network
E0217 12:58:39.058890  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-313673 --network=existing-network: (28.176450068s)
helpers_test.go:175: Cleaning up "existing-network-313673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-313673
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-313673: (1.95995544s)
I0217 12:58:55.032548  860382 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (30.30s)

TestKicCustomSubnet (35.85s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-501739 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-501739 --subnet=192.168.60.0/24: (33.65186555s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-501739 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-501739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-501739
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-501739: (2.171055043s)
--- PASS: TestKicCustomSubnet (35.85s)

TestKicStaticIP (34.19s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-575422 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-575422 --static-ip=192.168.200.200: (31.903622685s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-575422 ip
helpers_test.go:175: Cleaning up "static-ip-575422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-575422
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-575422: (2.106729281s)
--- PASS: TestKicStaticIP (34.19s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (71.2s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-793909 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-793909 --driver=docker  --container-runtime=crio: (29.686707939s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-796633 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-796633 --driver=docker  --container-runtime=crio: (35.751300202s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-793909
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-796633
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-796633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-796633
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-796633: (2.010229339s)
helpers_test.go:175: Cleaning up "first-793909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-793909
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-793909: (2.282922741s)
--- PASS: TestMinikubeProfile (71.20s)

TestMountStart/serial/StartWithMountFirst (9.09s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-779742 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-779742 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.094415137s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.09s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-779742 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (9.79s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-781555 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0217 13:01:26.653484  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-781555 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.785361591s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.79s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-781555 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-779742 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-779742 --alsologtostderr -v=5: (1.641082996s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-781555 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-781555
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-781555: (1.202049949s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.75s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-781555
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-781555: (6.746621016s)
--- PASS: TestMountStart/serial/RestartStopped (7.75s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-781555 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (79.06s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-337459 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-337459 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.540196875s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.06s)

TestMultiNode/serial/DeployApp2Nodes (6.43s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-337459 -- rollout status deployment/busybox: (4.441919828s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- exec busybox-58667487b6-b5sbm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- exec busybox-58667487b6-pnzw5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- exec busybox-58667487b6-b5sbm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- exec busybox-58667487b6-pnzw5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- exec busybox-58667487b6-b5sbm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- exec busybox-58667487b6-pnzw5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.43s)

TestMultiNode/serial/PingHostFrom2Pods (1.06s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- exec busybox-58667487b6-b5sbm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- exec busybox-58667487b6-b5sbm -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- exec busybox-58667487b6-pnzw5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-337459 -- exec busybox-58667487b6-pnzw5 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.06s)

TestMultiNode/serial/AddNode (33.14s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-337459 -v 3 --alsologtostderr
E0217 13:03:39.058869  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-337459 -v 3 --alsologtostderr: (32.447940543s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (33.14s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-337459 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.7s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

TestMultiNode/serial/CopyFile (10.01s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 cp testdata/cp-test.txt multinode-337459:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 cp multinode-337459:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4283268437/001/cp-test_multinode-337459.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 cp multinode-337459:/home/docker/cp-test.txt multinode-337459-m02:/home/docker/cp-test_multinode-337459_multinode-337459-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459-m02 "sudo cat /home/docker/cp-test_multinode-337459_multinode-337459-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 cp multinode-337459:/home/docker/cp-test.txt multinode-337459-m03:/home/docker/cp-test_multinode-337459_multinode-337459-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459-m03 "sudo cat /home/docker/cp-test_multinode-337459_multinode-337459-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 cp testdata/cp-test.txt multinode-337459-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 cp multinode-337459-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4283268437/001/cp-test_multinode-337459-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 cp multinode-337459-m02:/home/docker/cp-test.txt multinode-337459:/home/docker/cp-test_multinode-337459-m02_multinode-337459.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459 "sudo cat /home/docker/cp-test_multinode-337459-m02_multinode-337459.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 cp multinode-337459-m02:/home/docker/cp-test.txt multinode-337459-m03:/home/docker/cp-test_multinode-337459-m02_multinode-337459-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459-m03 "sudo cat /home/docker/cp-test_multinode-337459-m02_multinode-337459-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 cp testdata/cp-test.txt multinode-337459-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 cp multinode-337459-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4283268437/001/cp-test_multinode-337459-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 cp multinode-337459-m03:/home/docker/cp-test.txt multinode-337459:/home/docker/cp-test_multinode-337459-m03_multinode-337459.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459 "sudo cat /home/docker/cp-test_multinode-337459-m03_multinode-337459.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 cp multinode-337459-m03:/home/docker/cp-test.txt multinode-337459-m02:/home/docker/cp-test_multinode-337459-m03_multinode-337459-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 ssh -n multinode-337459-m02 "sudo cat /home/docker/cp-test_multinode-337459-m03_multinode-337459-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.01s)

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-337459 node stop m03: (1.223420267s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-337459 status: exit status 7 (531.597805ms)

-- stdout --
	multinode-337459
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-337459-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-337459-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-337459 status --alsologtostderr: exit status 7 (528.16977ms)

-- stdout --
	multinode-337459
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-337459-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-337459-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0217 13:04:00.983042  973744 out.go:345] Setting OutFile to fd 1 ...
	I0217 13:04:00.983417  973744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:04:00.983432  973744 out.go:358] Setting ErrFile to fd 2...
	I0217 13:04:00.983439  973744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:04:00.983728  973744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
	I0217 13:04:00.983986  973744 out.go:352] Setting JSON to false
	I0217 13:04:00.984042  973744 mustload.go:65] Loading cluster: multinode-337459
	I0217 13:04:00.984143  973744 notify.go:220] Checking for updates...
	I0217 13:04:00.984506  973744 config.go:182] Loaded profile config "multinode-337459": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0217 13:04:00.984529  973744 status.go:174] checking status of multinode-337459 ...
	I0217 13:04:00.985465  973744 cli_runner.go:164] Run: docker container inspect multinode-337459 --format={{.State.Status}}
	I0217 13:04:01.007375  973744 status.go:371] multinode-337459 host status = "Running" (err=<nil>)
	I0217 13:04:01.007405  973744 host.go:66] Checking if "multinode-337459" exists ...
	I0217 13:04:01.007734  973744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-337459
	I0217 13:04:01.035635  973744 host.go:66] Checking if "multinode-337459" exists ...
	I0217 13:04:01.035974  973744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 13:04:01.036038  973744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-337459
	I0217 13:04:01.054519  973744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/multinode-337459/id_rsa Username:docker}
	I0217 13:04:01.145932  973744 ssh_runner.go:195] Run: systemctl --version
	I0217 13:04:01.151219  973744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 13:04:01.163955  973744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 13:04:01.221312  973744 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2025-02-17 13:04:01.211276514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 13:04:01.221983  973744 kubeconfig.go:125] found "multinode-337459" server: "https://192.168.67.2:8443"
	I0217 13:04:01.222013  973744 api_server.go:166] Checking apiserver status ...
	I0217 13:04:01.222062  973744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0217 13:04:01.234401  973744 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1400/cgroup
	I0217 13:04:01.244871  973744 api_server.go:182] apiserver freezer: "11:freezer:/docker/4e071946c9cf83fe4f941432aabc9afdce2c3927ccfa319a899a87e785cb3368/crio/crio-74f898596aee7d6525f016a8b7b4578b12edac0751abfeaeed2cbfc604149f3a"
	I0217 13:04:01.244944  973744 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4e071946c9cf83fe4f941432aabc9afdce2c3927ccfa319a899a87e785cb3368/crio/crio-74f898596aee7d6525f016a8b7b4578b12edac0751abfeaeed2cbfc604149f3a/freezer.state
	I0217 13:04:01.254719  973744 api_server.go:204] freezer state: "THAWED"
	I0217 13:04:01.254750  973744 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0217 13:04:01.264172  973744 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0217 13:04:01.264201  973744 status.go:463] multinode-337459 apiserver status = Running (err=<nil>)
	I0217 13:04:01.264212  973744 status.go:176] multinode-337459 status: &{Name:multinode-337459 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 13:04:01.264229  973744 status.go:174] checking status of multinode-337459-m02 ...
	I0217 13:04:01.264556  973744 cli_runner.go:164] Run: docker container inspect multinode-337459-m02 --format={{.State.Status}}
	I0217 13:04:01.283336  973744 status.go:371] multinode-337459-m02 host status = "Running" (err=<nil>)
	I0217 13:04:01.283363  973744 host.go:66] Checking if "multinode-337459-m02" exists ...
	I0217 13:04:01.283689  973744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-337459-m02
	I0217 13:04:01.301979  973744 host.go:66] Checking if "multinode-337459-m02" exists ...
	I0217 13:04:01.302327  973744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 13:04:01.302380  973744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-337459-m02
	I0217 13:04:01.323639  973744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/20427-855004/.minikube/machines/multinode-337459-m02/id_rsa Username:docker}
	I0217 13:04:01.417052  973744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 13:04:01.429906  973744 status.go:176] multinode-337459-m02 status: &{Name:multinode-337459-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0217 13:04:01.429947  973744 status.go:174] checking status of multinode-337459-m03 ...
	I0217 13:04:01.430270  973744 cli_runner.go:164] Run: docker container inspect multinode-337459-m03 --format={{.State.Status}}
	I0217 13:04:01.451635  973744 status.go:371] multinode-337459-m03 host status = "Stopped" (err=<nil>)
	I0217 13:04:01.451662  973744 status.go:384] host is not running, skipping remaining checks
	I0217 13:04:01.451670  973744 status.go:176] multinode-337459-m03 status: &{Name:multinode-337459-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)

TestMultiNode/serial/StartAfterStop (10.05s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-337459 node start m03 -v=7 --alsologtostderr: (9.282820123s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.05s)

TestMultiNode/serial/RestartKeepsNodes (89.97s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-337459
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-337459
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-337459: (24.719651953s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-337459 --wait=true -v=8 --alsologtostderr
E0217 13:05:02.125534  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-337459 --wait=true -v=8 --alsologtostderr: (1m5.118205365s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-337459
--- PASS: TestMultiNode/serial/RestartKeepsNodes (89.97s)

TestMultiNode/serial/DeleteNode (5.33s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-337459 node delete m03: (4.645350532s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.33s)

TestMultiNode/serial/StopMultiNode (23.87s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-337459 stop: (23.646642029s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-337459 status: exit status 7 (109.969288ms)

-- stdout --
	multinode-337459
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-337459-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-337459 status --alsologtostderr: exit status 7 (108.539044ms)

-- stdout --
	multinode-337459
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-337459-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0217 13:06:10.622026  981182 out.go:345] Setting OutFile to fd 1 ...
	I0217 13:06:10.622200  981182 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:06:10.622231  981182 out.go:358] Setting ErrFile to fd 2...
	I0217 13:06:10.622252  981182 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:06:10.622528  981182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
	I0217 13:06:10.622738  981182 out.go:352] Setting JSON to false
	I0217 13:06:10.622822  981182 mustload.go:65] Loading cluster: multinode-337459
	I0217 13:06:10.622905  981182 notify.go:220] Checking for updates...
	I0217 13:06:10.623301  981182 config.go:182] Loaded profile config "multinode-337459": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0217 13:06:10.623346  981182 status.go:174] checking status of multinode-337459 ...
	I0217 13:06:10.623909  981182 cli_runner.go:164] Run: docker container inspect multinode-337459 --format={{.State.Status}}
	I0217 13:06:10.643959  981182 status.go:371] multinode-337459 host status = "Stopped" (err=<nil>)
	I0217 13:06:10.643983  981182 status.go:384] host is not running, skipping remaining checks
	I0217 13:06:10.643991  981182 status.go:176] multinode-337459 status: &{Name:multinode-337459 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 13:06:10.644023  981182 status.go:174] checking status of multinode-337459-m02 ...
	I0217 13:06:10.644692  981182 cli_runner.go:164] Run: docker container inspect multinode-337459-m02 --format={{.State.Status}}
	I0217 13:06:10.677562  981182 status.go:371] multinode-337459-m02 host status = "Stopped" (err=<nil>)
	I0217 13:06:10.677636  981182 status.go:384] host is not running, skipping remaining checks
	I0217 13:06:10.677660  981182 status.go:176] multinode-337459-m02 status: &{Name:multinode-337459-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.87s)

TestMultiNode/serial/RestartMultiNode (46.77s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-337459 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0217 13:06:26.653796  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-337459 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (46.01546685s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-337459 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.77s)

TestMultiNode/serial/ValidateNameConflict (33.28s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-337459
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-337459-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-337459-m02 --driver=docker  --container-runtime=crio: exit status 14 (145.794102ms)

-- stdout --
	* [multinode-337459-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-855004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-855004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-337459-m02' is duplicated with machine name 'multinode-337459-m02' in profile 'multinode-337459'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-337459-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-337459-m03 --driver=docker  --container-runtime=crio: (30.706215089s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-337459
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-337459: exit status 80 (341.737571ms)

-- stdout --
	* Adding node m03 to cluster multinode-337459 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-337459-m03 already exists in multinode-337459-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-337459-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-337459-m03: (1.983398194s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.28s)

TestPreload (125.55s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-758629 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0217 13:08:39.059198  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-758629 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m33.111284189s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-758629 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-758629 image pull gcr.io/k8s-minikube/busybox: (3.449676387s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-758629
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-758629: (5.810893712s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-758629 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-758629 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.292508376s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-758629 image list
helpers_test.go:175: Cleaning up "test-preload-758629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-758629
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-758629: (2.570788667s)
--- PASS: TestPreload (125.55s)

TestScheduledStopUnix (106.44s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-793884 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-793884 --memory=2048 --driver=docker  --container-runtime=crio: (30.350086314s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-793884 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-793884 -n scheduled-stop-793884
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-793884 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0217 13:10:11.292307  860382 retry.go:31] will retry after 104.001µs: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.292779  860382 retry.go:31] will retry after 207.568µs: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.293105  860382 retry.go:31] will retry after 174.255µs: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.293940  860382 retry.go:31] will retry after 500.375µs: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.295111  860382 retry.go:31] will retry after 423.423µs: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.296187  860382 retry.go:31] will retry after 578.111µs: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.297286  860382 retry.go:31] will retry after 1.1833ms: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.299467  860382 retry.go:31] will retry after 2.448313ms: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.302716  860382 retry.go:31] will retry after 1.357401ms: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.304884  860382 retry.go:31] will retry after 5.218194ms: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.312154  860382 retry.go:31] will retry after 7.992868ms: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.320874  860382 retry.go:31] will retry after 6.525139ms: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.328160  860382 retry.go:31] will retry after 18.180186ms: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.347409  860382 retry.go:31] will retry after 19.012025ms: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.366606  860382 retry.go:31] will retry after 32.011006ms: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
I0217 13:10:11.400602  860382 retry.go:31] will retry after 54.586514ms: open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/scheduled-stop-793884/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-793884 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-793884 -n scheduled-stop-793884
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-793884
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-793884 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-793884
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-793884: exit status 7 (73.268829ms)

-- stdout --
	scheduled-stop-793884
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-793884 -n scheduled-stop-793884
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-793884 -n scheduled-stop-793884: exit status 7 (71.534189ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-793884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-793884
E0217 13:11:26.653059  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-793884: (4.462822268s)
--- PASS: TestScheduledStopUnix (106.44s)

TestInsufficientStorage (10.52s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-565901 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-565901 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.013025964s)

-- stdout --
	{"specversion":"1.0","id":"3ed0864b-439f-487f-8d2f-d3dfd1a91ef3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-565901] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9b1300b8-40c6-4a16-b682-dc1a179c7df7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20427"}}
	{"specversion":"1.0","id":"770bc8af-0e0e-45a8-a76d-f0ff3319b9dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5e1e6113-0f11-4255-836e-1f42ea9a861e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20427-855004/kubeconfig"}}
	{"specversion":"1.0","id":"ebe58059-71da-4fe8-983c-a23e562eae61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-855004/.minikube"}}
	{"specversion":"1.0","id":"2a9a0bd3-439d-448b-ac10-8ac959e70df8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c67949f9-37b2-4660-94d4-eb50e3f70fe6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3e824600-08cb-4ac8-9c0b-0c8283688d3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f1844d08-a3c6-4b65-8269-218e916668cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"55a142d6-9593-489e-ae4c-e743de0da36b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4605903d-6c9a-4f48-b6b2-fdcd13cebe3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0c0f2eeb-e445-4597-bdb0-3019390f0308","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-565901\" primary control-plane node in \"insufficient-storage-565901\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"410fff15-7b09-491e-8208-6aa2e3b57896","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1739182054-20387 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"051e61b7-60ab-47df-b5c6-35b384b9a7d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f74bcae2-8f60-472b-b749-3b6a8df4495b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-565901 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-565901 --output=json --layout=cluster: exit status 7 (293.201597ms)

-- stdout --
	{"Name":"insufficient-storage-565901","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-565901","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0217 13:11:35.135516  998887 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-565901" does not appear in /home/jenkins/minikube-integration/20427-855004/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-565901 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-565901 --output=json --layout=cluster: exit status 7 (295.337826ms)

-- stdout --
	{"Name":"insufficient-storage-565901","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-565901","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0217 13:11:35.432524  998952 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-565901" does not appear in /home/jenkins/minikube-integration/20427-855004/kubeconfig
	E0217 13:11:35.443533  998952 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/insufficient-storage-565901/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-565901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-565901
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-565901: (1.91890017s)
--- PASS: TestInsufficientStorage (10.52s)

TestRunningBinaryUpgrade (77.78s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4143180452 start -p running-upgrade-839490 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4143180452 start -p running-upgrade-839490 --memory=2200 --vm-driver=docker  --container-runtime=crio: (47.725834292s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-839490 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-839490 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.975615758s)
helpers_test.go:175: Cleaning up "running-upgrade-839490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-839490
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-839490: (3.438753396s)
--- PASS: TestRunningBinaryUpgrade (77.78s)

TestKubernetesUpgrade (138s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-063543 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-063543 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m13.432295202s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-063543
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-063543: (1.877429111s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-063543 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-063543 status --format={{.Host}}: exit status 7 (170.668859ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-063543 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-063543 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.415260637s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-063543 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-063543 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-063543 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (170.678786ms)

-- stdout --
	* [kubernetes-upgrade-063543] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-855004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-855004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-063543
	    minikube start -p kubernetes-upgrade-063543 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0635432 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-063543 --kubernetes-version=v1.32.1

** /stderr **
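The exit-106 path above is minikube's downgrade guard firing: v1.20.0 was requested against a running v1.32.1 cluster. A minimal shell sketch of one way to detect such a downgrade, using `sort -V` (an illustration only, not minikube's actual implementation; `is_downgrade` is a hypothetical helper):

```shell
# Hypothetical helper: succeeds (exit 0) when "requested" would be a real
# downgrade from "current". Relies on sort -V placing the higher semantic
# version last; equal versions are not a downgrade.
is_downgrade() {
  current="$1"; requested="$2"
  [ "$current" != "$requested" ] &&
    [ "$(printf '%s\n%s\n' "$current" "$requested" | sort -V | tail -n1)" = "$current" ]
}
```

With the versions from this run, `is_downgrade v1.32.1 v1.20.0` succeeds, which is the condition that maps to K8S_DOWNGRADE_UNSUPPORTED.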
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-063543 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-063543 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.334511312s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-063543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-063543
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-063543: (2.426368521s)
--- PASS: TestKubernetesUpgrade (138.00s)

TestMissingContainerUpgrade (163.12s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3623966438 start -p missing-upgrade-949759 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3623966438 start -p missing-upgrade-949759 --memory=2200 --driver=docker  --container-runtime=crio: (1m27.19874909s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-949759
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-949759: (10.487298791s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-949759
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-949759 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0217 13:13:39.059141  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-949759 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m1.848117933s)
helpers_test.go:175: Cleaning up "missing-upgrade-949759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-949759
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-949759: (2.89657024s)
--- PASS: TestMissingContainerUpgrade (163.12s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-438435 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-438435 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (102.863012ms)

-- stdout --
	* [NoKubernetes-438435] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-855004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-855004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
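The exit-14 above is the expected outcome: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. A hedged sketch of that guard (illustration only; `check_flags` is hypothetical, not minikube's source):

```shell
# Hypothetical re-creation of the MK_USAGE guard: reject a version flag
# when Kubernetes is disabled, mirroring the error text in the log above.
check_flags() {
  no_kubernetes="$1"; kubernetes_version="$2"
  if [ "$no_kubernetes" = "true" ] && [ -n "$kubernetes_version" ]; then
    echo 'X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes' >&2
    return 14
  fi
  return 0
}
```

Clearing a global default with `minikube config unset kubernetes-version`, as the error message itself suggests, is the supported way out.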
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (39.19s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-438435 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-438435 --driver=docker  --container-runtime=crio: (38.601298316s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-438435 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.19s)

TestNoKubernetes/serial/StartWithStopK8s (8.64s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-438435 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-438435 --no-kubernetes --driver=docker  --container-runtime=crio: (6.225666377s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-438435 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-438435 status -o json: exit status 2 (309.349888ms)

-- stdout --
	{"Name":"NoKubernetes-438435","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
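The `status -o json` output above is script-friendly; exit status 2 signals a running host with Kubernetes stopped. A small sketch of consuming it (grep-based so it needs no jq; the JSON literal is copied verbatim from this run):

```shell
# Status JSON copied from the test output above; a real script would
# capture it with: status="$(minikube -p NoKubernetes-438435 status -o json)"
status='{"Name":"NoKubernetes-438435","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

# Crude field checks; adequate for flat, known-shape JSON like this.
echo "$status" | grep -q '"Host":"Running"'    && echo "host is up"
echo "$status" | grep -q '"Kubelet":"Stopped"' && echo "kubelet is stopped"
```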
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-438435
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-438435: (2.109592017s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.64s)

TestNoKubernetes/serial/Start (11.53s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-438435 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-438435 --no-kubernetes --driver=docker  --container-runtime=crio: (11.527193845s)
--- PASS: TestNoKubernetes/serial/Start (11.53s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-438435 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-438435 "sudo systemctl is-active --quiet service kubelet": exit status 1 (369.308779ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

TestNoKubernetes/serial/ProfileList (1.22s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.22s)

TestNoKubernetes/serial/Stop (1.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-438435
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-438435: (1.262635191s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

TestNoKubernetes/serial/StartNoArgs (7.43s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-438435 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-438435 --driver=docker  --container-runtime=crio: (7.430067704s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.43s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.44s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-438435 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-438435 "sudo systemctl is-active --quiet service kubelet": exit status 1 (441.605167ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.44s)

TestStoppedBinaryUpgrade/Setup (0.59s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.59s)

TestStoppedBinaryUpgrade/Upgrade (82.47s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3795345069 start -p stopped-upgrade-448821 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0217 13:14:29.723977  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3795345069 start -p stopped-upgrade-448821 --memory=2200 --vm-driver=docker  --container-runtime=crio: (41.338311671s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3795345069 -p stopped-upgrade-448821 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3795345069 -p stopped-upgrade-448821 stop: (3.037764406s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-448821 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-448821 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.090121107s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (82.47s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.38s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-448821
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-448821: (1.382037739s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.38s)

TestPause/serial/Start (61.52s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-517819 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-517819 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m1.517909237s)
--- PASS: TestPause/serial/Start (61.52s)

TestPause/serial/SecondStartNoReconfiguration (37.78s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-517819 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-517819 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.747493325s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.78s)

TestNetworkPlugins/group/false (5.01s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-649291 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-649291 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (195.169621ms)

-- stdout --
	* [false-649291] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-855004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-855004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I0217 13:17:12.545617 1033497 out.go:345] Setting OutFile to fd 1 ...
	I0217 13:17:12.545826 1033497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:17:12.545854 1033497 out.go:358] Setting ErrFile to fd 2...
	I0217 13:17:12.545871 1033497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:17:12.546149 1033497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-855004/.minikube/bin
	I0217 13:17:12.546599 1033497 out.go:352] Setting JSON to false
	I0217 13:17:12.547637 1033497 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":21581,"bootTime":1739776652,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0217 13:17:12.547747 1033497 start.go:139] virtualization:  
	I0217 13:17:12.551648 1033497 out.go:177] * [false-649291] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0217 13:17:12.555462 1033497 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 13:17:12.555556 1033497 notify.go:220] Checking for updates...
	I0217 13:17:12.561772 1033497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 13:17:12.564874 1033497 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-855004/kubeconfig
	I0217 13:17:12.567728 1033497 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-855004/.minikube
	I0217 13:17:12.570518 1033497 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0217 13:17:12.573335 1033497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 13:17:12.576778 1033497 config.go:182] Loaded profile config "pause-517819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0217 13:17:12.576878 1033497 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 13:17:12.605030 1033497 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0217 13:17:12.605221 1033497 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 13:17:12.666981 1033497 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-17 13:17:12.657345219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 13:17:12.667094 1033497 docker.go:318] overlay module found
	I0217 13:17:12.670162 1033497 out.go:177] * Using the docker driver based on user configuration
	I0217 13:17:12.673056 1033497 start.go:297] selected driver: docker
	I0217 13:17:12.673077 1033497 start.go:901] validating driver "docker" against <nil>
	I0217 13:17:12.673111 1033497 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 13:17:12.676626 1033497 out.go:201] 
	W0217 13:17:12.679497 1033497 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0217 13:17:12.682267 1033497 out.go:201] 

** /stderr **
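The exit-14 here is by design: the test passes `--cni=false` with crio, and CRI-O ships no built-in networking, so minikube refuses the combination. A hedged sketch of that validation (illustration only; `cni_ok` is hypothetical, not minikube's source):

```shell
# Hypothetical re-creation of the guard behind the MK_USAGE error above:
# the crio runtime must be paired with some CNI (bridge, calico, ...).
cni_ok() {
  runtime="$1"; cni="$2"
  if [ "$runtime" = "crio" ] && [ "$cni" = "false" ]; then
    echo 'X Exiting due to MK_USAGE: The "crio" container runtime requires CNI' >&2
    return 14
  fi
  return 0
}
```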
net_test.go:88: 
----------------------- debugLogs start: false-649291 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-649291

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-649291

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-649291

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-649291

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-649291

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-649291

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-649291

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-649291

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-649291

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-649291

>>> host: /etc/nsswitch.conf:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

>>> host: /etc/hosts:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

>>> host: /etc/resolv.conf:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-649291

>>> host: crictl pods:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

>>> host: crictl containers:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

>>> k8s: describe netcat deployment:
error: context "false-649291" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-649291" does not exist

>>> k8s: netcat logs:
error: context "false-649291" does not exist

>>> k8s: describe coredns deployment:
error: context "false-649291" does not exist

>>> k8s: describe coredns pods:
error: context "false-649291" does not exist

>>> k8s: coredns logs:
error: context "false-649291" does not exist

>>> k8s: describe api server pod(s):
error: context "false-649291" does not exist

>>> k8s: api server logs:
error: context "false-649291" does not exist

>>> host: /etc/cni:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

>>> host: ip a s:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

>>> host: ip r s:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

>>> host: iptables-save:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

>>> host: iptables table nat:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

>>> k8s: describe kube-proxy daemon set:
error: context "false-649291" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-649291" does not exist

>>> k8s: kube-proxy logs:
error: context "false-649291" does not exist
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20427-855004/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 17 Feb 2025 13:17:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-517819
contexts:
- context:
    cluster: pause-517819
    extensions:
    - extension:
        last-update: Mon, 17 Feb 2025 13:17:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-517819
  name: pause-517819
current-context: pause-517819
kind: Config
preferences: {}
users:
- name: pause-517819
  user:
    client-certificate: /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/pause-517819/client.crt
    client-key: /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/pause-517819/client.key
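The repeated `error: context "false-649291" does not exist` lines above follow directly from this kubeconfig: the debugLogs collector runs kubectl pinned to the already-deleted `false-649291` profile, but the only context the file defines is `pause-517819`, so the lookup fails before any API server is contacted. A minimal sketch of that lookup (not kubectl's actual implementation; the dict mirrors the config shown above):

```python
# Hypothetical sketch of kubectl's named-context resolution. The kubeconfig
# above only defines "pause-517819", so "false-649291" cannot resolve.
kubeconfig = {
    "contexts": [
        {"name": "pause-517819", "context": {"cluster": "pause-517819"}},
    ],
    "current-context": "pause-517819",
}

def resolve_context(cfg: dict, name: str) -> dict:
    """Return the named context, or fail the way the log entries above do."""
    for ctx in cfg["contexts"]:
        if ctx["name"] == name:
            return ctx["context"]
    raise KeyError(f'context "{name}" does not exist')

print(resolve_context(kubeconfig, "pause-517819"))
try:
    resolve_context(kubeconfig, "false-649291")
except KeyError as err:
    print(err)
```

This is why every `>>> k8s:` probe in the section errors out while the `pause-517819` tests further down succeed: the failures are a consequence of profile deletion ordering, not of the cluster under test.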

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-649291

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649291"

                                                
                                                
----------------------- debugLogs end: false-649291 [took: 4.6490979s] --------------------------------
helpers_test.go:175: Cleaning up "false-649291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-649291
--- PASS: TestNetworkPlugins/group/false (5.01s)

                                                
                                    
TestPause/serial/Pause (0.95s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-517819 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

                                                
                                    
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-517819 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-517819 --output=json --layout=cluster: exit status 2 (330.83993ms)

                                                
                                                
-- stdout --
	{"Name":"pause-517819","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-517819","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
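The single-line JSON captured above is how `minikube status --output=json --layout=cluster` reports a paused cluster: status codes mimic HTTP (200 OK, 418 Paused, 405 Stopped), and the non-zero exit status 2 signals "not all components running", which the test accepts. A short parsing sketch over a trimmed copy of that payload (fields reduced for brevity; values match the log):

```python
import json

# Trimmed copy of the --output=json payload captured in the test above.
raw = '''{"Name":"pause-517819","StatusCode":418,"StatusName":"Paused",
"Nodes":[{"Name":"pause-517819","StatusCode":200,
"Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},
"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}'''

status = json.loads(raw)
paused = status["StatusCode"] == 418  # 418 is minikube's "Paused" code
print(paused, status["Nodes"][0]["Components"]["kubelet"]["StatusName"])
# prints: True Stopped
```

Note that the kubelet reports 405/Stopped while the apiserver reports 418/Paused; with the crio runtime, pausing freezes the control-plane containers and stops the kubelet, matching the mixed component states in the JSON.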

                                                
                                    
TestPause/serial/Unpause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-517819 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

                                                
                                    
TestPause/serial/PauseAgain (1.59s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-517819 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-517819 --alsologtostderr -v=5: (1.592362716s)
--- PASS: TestPause/serial/PauseAgain (1.59s)

                                                
                                    
TestPause/serial/DeletePaused (2.97s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-517819 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-517819 --alsologtostderr -v=5: (2.971058848s)
--- PASS: TestPause/serial/DeletePaused (2.97s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.53s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-517819
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-517819: exit status 1 (21.88968ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-517819: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)
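In the VerifyDeletedResources check above, `docker volume inspect pause-517819` exits 1 with `[]` on stdout and a "no such volume" message on stderr; the test treats that combination as proof the volume was cleaned up rather than as a failure. A small interpretation sketch (hypothetical helper, not minikube's code), using the captured output:

```python
import json

def volume_absent(exit_status: int, stdout: str) -> bool:
    """True when `docker volume inspect` reports the volume is gone:
    non-zero exit plus an empty JSON array on stdout."""
    return exit_status != 0 and json.loads(stdout) == []

# Matches the captured run: exit status 1, stdout "[]".
print(volume_absent(1, "[]"))  # prints: True
```

The same pattern (non-zero exit treated as a pass condition) recurs throughout this report, e.g. `status error: exit status 7 (may be ok)` for stopped hosts.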

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (162.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-793782 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-793782 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m42.292163341s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (162.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-793782 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2cf3d37f-2707-4363-bd4a-e3eb052c611b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2cf3d37f-2707-4363-bd4a-e3eb052c611b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.002603237s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-793782 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.78s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-793782 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-793782 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.342327589s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-793782 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.69s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (73.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-017025 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-017025 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (1m13.288832965s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-793782 --alsologtostderr -v=3
E0217 13:21:42.127676  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-793782 --alsologtostderr -v=3: (13.062530768s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-793782 -n old-k8s-version-793782
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-793782 -n old-k8s-version-793782: exit status 7 (124.760233ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-793782 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (306.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-793782 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-793782 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (5m6.531030147s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-793782 -n old-k8s-version-793782
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (306.91s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (264.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-017025 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [071fb34f-fb6c-40e9-b3ac-8ba8af641588] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0217 13:23:39.058305  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:26:26.653675  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [071fb34f-fb6c-40e9-b3ac-8ba8af641588] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 4m24.003436738s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-017025 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (264.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-gf8vg" [40f3a2ea-8ffe-4647-b3e6-34f13edbe182] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004054498s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-gf8vg" [40f3a2ea-8ffe-4647-b3e6-34f13edbe182] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003134073s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-793782 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-793782 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-793782 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-793782 -n old-k8s-version-793782
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-793782 -n old-k8s-version-793782: exit status 2 (324.444021ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-793782 -n old-k8s-version-793782
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-793782 -n old-k8s-version-793782: exit status 2 (335.535757ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-793782 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-793782 -n old-k8s-version-793782
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-793782 -n old-k8s-version-793782
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-017025 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-017025 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.283619058s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-017025 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (55.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-930216 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-930216 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (55.428506087s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.43s)

TestStartStop/group/no-preload/serial/Stop (12.2s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-017025 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-017025 --alsologtostderr -v=3: (12.201442834s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.20s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-017025 -n no-preload-017025
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-017025 -n no-preload-017025: exit status 7 (98.019729ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-017025 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/no-preload/serial/SecondStart (303.21s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-017025 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-017025 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (5m2.81007432s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-017025 -n no-preload-017025
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (303.21s)

TestStartStop/group/embed-certs/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-930216 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f535bfe8-a531-416e-b324-c1052dcac6e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f535bfe8-a531-416e-b324-c1052dcac6e5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004653356s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-930216 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-930216 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-930216 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.037017268s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-930216 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/embed-certs/serial/Stop (11.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-930216 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-930216 --alsologtostderr -v=3: (11.960205746s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-930216 -n embed-certs-930216
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-930216 -n embed-certs-930216: exit status 7 (76.653713ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-930216 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (274.05s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-930216 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0217 13:28:39.059162  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:09.725646  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:26.653087  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:28.791437  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:28.797790  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:28.809134  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:28.830570  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:28.872049  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:28.953527  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:29.115158  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:29.436607  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:30.078300  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:31.360288  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:33.922280  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:39.044439  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:31:49.286547  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:32:09.768054  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-930216 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m33.587817866s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-930216 -n embed-certs-930216
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (274.05s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-lbdxf" [61d3439b-fe9d-44e2-9c55-de19365d6c50] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003909017s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-lbdxf" [61d3439b-fe9d-44e2-9c55-de19365d6c50] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004274319s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-017025 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-017025 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (3.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-017025 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-017025 -n no-preload-017025
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-017025 -n no-preload-017025: exit status 2 (324.726206ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-017025 -n no-preload-017025
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-017025 -n no-preload-017025: exit status 2 (339.540102ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-017025 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-017025 -n no-preload-017025
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-017025 -n no-preload-017025
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-911857 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0217 13:32:53.312618  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:32:53.319406  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:32:53.330845  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:32:53.352321  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:32:53.393787  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:32:53.476858  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:32:53.638269  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:32:53.959642  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:32:54.601060  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:32:55.883217  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:32:58.444765  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:33:03.566335  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-911857 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (54.515676693s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.52s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-r72cz" [0b87f68a-2fc9-44d1-b781-6cb305de635a] Running
E0217 13:33:13.808028  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004641972s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-r72cz" [0b87f68a-2fc9-44d1-b781-6cb305de635a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003549359s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-930216 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-930216 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/embed-certs/serial/Pause (3.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-930216 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-930216 --alsologtostderr -v=1: (1.015272043s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-930216 -n embed-certs-930216
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-930216 -n embed-certs-930216: exit status 2 (408.742845ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-930216 -n embed-certs-930216
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-930216 -n embed-certs-930216: exit status 2 (378.411072ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-930216 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-930216 -n embed-certs-930216
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-930216 -n embed-certs-930216
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.62s)

TestStartStop/group/newest-cni/serial/FirstStart (40.05s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-607110 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0217 13:33:34.289338  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:33:39.058309  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-607110 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (40.045317694s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.05s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-911857 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cc17b47a-f971-4b49-82b8-a3be2e82a8cd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cc17b47a-f971-4b49-82b8-a3be2e82a8cd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004539675s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-911857 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.46s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-911857 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-911857 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.344922919s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-911857 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.61s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-911857 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-911857 --alsologtostderr -v=3: (12.426260088s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-911857 -n default-k8s-diff-port-911857
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-911857 -n default-k8s-diff-port-911857: exit status 7 (131.866172ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-911857 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (293.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-911857 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-911857 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m53.147519314s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-911857 -n default-k8s-diff-port-911857
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (293.52s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.77s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-607110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0217 13:34:12.651646  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-607110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.768119907s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.77s)

TestStartStop/group/newest-cni/serial/Stop (1.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-607110 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-607110 --alsologtostderr -v=3: (1.276311118s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-607110 -n newest-cni-607110
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-607110 -n newest-cni-607110: exit status 7 (89.361301ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-607110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (22.47s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-607110 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0217 13:34:15.251614  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-607110 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (21.753196054s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-607110 -n newest-cni-607110
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.47s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-607110 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/newest-cni/serial/Pause (4.53s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-607110 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-607110 --alsologtostderr -v=1: (1.639014538s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-607110 -n newest-cni-607110
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-607110 -n newest-cni-607110: exit status 2 (425.464831ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-607110 -n newest-cni-607110
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-607110 -n newest-cni-607110: exit status 2 (425.984301ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-607110 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-607110 -n newest-cni-607110
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-607110 -n newest-cni-607110
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.53s)

TestNetworkPlugins/group/auto/Start (50.05s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-649291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-649291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (50.046205385s)
--- PASS: TestNetworkPlugins/group/auto/Start (50.05s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-649291 "pgrep -a kubelet"
I0217 13:35:35.777873  860382 config.go:182] Loaded profile config "auto-649291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-649291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-24mq6" [4ee607bd-08a1-4599-bc80-4e8fae38bdee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0217 13:35:37.173463  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-24mq6" [4ee607bd-08a1-4599-bc80-4e8fae38bdee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004808981s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.29s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-649291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-649291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-649291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/Start (53.53s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-649291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0217 13:36:26.653122  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/addons-925274/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:36:28.791956  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:36:56.493476  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-649291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (53.534442876s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (53.53s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gpqvh" [901f4d04-6a40-4725-973d-58b5dbdc14f0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.002962926s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-649291 "pgrep -a kubelet"
I0217 13:37:06.882143  860382 config.go:182] Loaded profile config "kindnet-649291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-649291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dhnpk" [3812207d-4ff7-45c2-98e1-73c05d6db1a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dhnpk" [3812207d-4ff7-45c2-98e1-73c05d6db1a5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00359929s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-649291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-649291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-649291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/calico/Start (65.51s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-649291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0217 13:37:53.312395  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:38:21.015399  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:38:22.129116  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:38:39.058617  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/functional-935264/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-649291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m5.510972912s)
--- PASS: TestNetworkPlugins/group/calico/Start (65.51s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-fdx8b" [da7f29e0-a0b3-4fe4-b09b-fdb3f4f19f08] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008329475s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-649291 "pgrep -a kubelet"
I0217 13:38:51.319991  860382 config.go:182] Loaded profile config "calico-649291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (13.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-649291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8kbs4" [9ccda1ef-aeb6-42da-8b9c-97e3caecdfab] Pending
helpers_test.go:344: "netcat-5d86dc444-8kbs4" [9ccda1ef-aeb6-42da-8b9c-97e3caecdfab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.002846871s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.26s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-dqx8l" [537385ec-85c5-4588-b73c-c5be2bf10d77] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00288121s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-649291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-649291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-649291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-dqx8l" [537385ec-85c5-4588-b73c-c5be2bf10d77] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003823928s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-911857 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-911857 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-911857 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-911857 --alsologtostderr -v=1: (1.179466131s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-911857 -n default-k8s-diff-port-911857
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-911857 -n default-k8s-diff-port-911857: exit status 2 (487.748317ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-911857 -n default-k8s-diff-port-911857
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-911857 -n default-k8s-diff-port-911857: exit status 2 (399.801405ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-911857 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-911857 --alsologtostderr -v=1: (1.028384798s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-911857 -n default-k8s-diff-port-911857
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-911857 -n default-k8s-diff-port-911857
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.33s)

TestNetworkPlugins/group/custom-flannel/Start (63.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-649291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-649291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m3.313957732s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.31s)

TestNetworkPlugins/group/enable-default-cni/Start (77.95s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-649291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-649291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m17.953076996s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (77.95s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-649291 "pgrep -a kubelet"
I0217 13:40:27.251523  860382 config.go:182] Loaded profile config "custom-flannel-649291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-649291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-gsdfk" [672418ef-c081-41ec-a7af-76a342e6789a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-gsdfk" [672418ef-c081-41ec-a7af-76a342e6789a] Running
E0217 13:40:36.041492  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/auto-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:40:36.047848  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/auto-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:40:36.059225  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/auto-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:40:36.080826  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/auto-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:40:36.122337  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/auto-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:40:36.203814  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/auto-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:40:36.365427  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/auto-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:40:36.687699  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/auto-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:40:37.329580  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/auto-649291/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.002947885s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-649291 exec deployment/netcat -- nslookup kubernetes.default
E0217 13:40:38.611242  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/auto-649291/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-649291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-649291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-649291 "pgrep -a kubelet"
I0217 13:40:50.251088  860382 config.go:182] Loaded profile config "enable-default-cni-649291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-649291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-tqwx9" [924dd5ff-3ee4-4306-9742-75e0941d852c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0217 13:40:56.537757  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/auto-649291/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-tqwx9" [924dd5ff-3ee4-4306-9742-75e0941d852c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003869748s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.39s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (63.58s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-649291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-649291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.575237263s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.58s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-649291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-649291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-649291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (74.67s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-649291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0217 13:41:28.791132  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/old-k8s-version-793782/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:41:57.981313  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/auto-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:42:00.588206  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/kindnet-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:42:00.594519  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/kindnet-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:42:00.605875  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/kindnet-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:42:00.627376  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/kindnet-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:42:00.668755  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/kindnet-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:42:00.750207  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/kindnet-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:42:00.911584  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/kindnet-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:42:01.233232  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/kindnet-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:42:01.875197  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/kindnet-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:42:03.156556  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/kindnet-649291/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-649291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m14.674856107s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.67s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-mmcsn" [5da23d25-e338-4a34-94a0-d069cae5a466] Running
E0217 13:42:05.718853  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/kindnet-649291/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:42:10.840966  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/kindnet-649291/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003123888s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-649291 "pgrep -a kubelet"
I0217 13:42:11.961270  860382 config.go:182] Loaded profile config "flannel-649291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-649291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5q49r" [d712bd7c-29a6-4e9a-b43b-27640a309551] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-5q49r" [d712bd7c-29a6-4e9a-b43b-27640a309551] Running
E0217 13:42:21.082391  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/kindnet-649291/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003012663s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-649291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-649291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-649291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-649291 "pgrep -a kubelet"
I0217 13:42:42.515767  860382 config.go:182] Loaded profile config "bridge-649291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.47s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-649291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hthw5" [18ea6169-781e-4191-b8c9-829d0618e159] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hthw5" [18ea6169-781e-4191-b8c9-829d0618e159] Running
E0217 13:42:53.311884  860382 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/no-preload-017025/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.021090195s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.47s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-649291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-649291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-649291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (32/331)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.58s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-432282 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-432282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-432282
--- SKIP: TestDownloadOnlyKic (0.58s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-925274 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1804: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-237454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-237454
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-649291 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-649291

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-649291

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-649291

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-649291

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-649291

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-649291

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-649291

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-649291

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-649291

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-649291

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: /etc/hosts:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: /etc/resolv.conf:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-649291

>>> host: crictl pods:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: crictl containers:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> k8s: describe netcat deployment:
error: context "kubenet-649291" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-649291" does not exist

>>> k8s: netcat logs:
error: context "kubenet-649291" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-649291" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-649291" does not exist

>>> k8s: coredns logs:
error: context "kubenet-649291" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-649291" does not exist

>>> k8s: api server logs:
error: context "kubenet-649291" does not exist

>>> host: /etc/cni:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: ip a s:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: ip r s:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: iptables-save:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: iptables table nat:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-649291" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-649291" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-649291" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: kubelet daemon config:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> k8s: kubelet logs:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20427-855004/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 17 Feb 2025 13:17:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-517819
contexts:
- context:
    cluster: pause-517819
    extensions:
    - extension:
        last-update: Mon, 17 Feb 2025 13:17:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-517819
  name: pause-517819
current-context: pause-517819
kind: Config
preferences: {}
users:
- name: pause-517819
  user:
    client-certificate: /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/pause-517819/client.crt
    client-key: /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/pause-517819/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-649291

>>> host: docker daemon status:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: docker daemon config:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: docker system info:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: cri-docker daemon status:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: cri-docker daemon config:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: cri-dockerd version:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: containerd daemon status:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: containerd daemon config:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: containerd config dump:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: crio daemon status:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: crio daemon config:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: /etc/crio:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

>>> host: crio config:
* Profile "kubenet-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649291"

----------------------- debugLogs end: kubenet-649291 [took: 3.667402136s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-649291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-649291
--- SKIP: TestNetworkPlugins/group/kubenet (3.83s)

TestNetworkPlugins/group/cilium (5.3s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-649291 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-649291

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-649291

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-649291

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-649291

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-649291

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-649291

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-649291

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-649291

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-649291

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-649291

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-649291

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-649291" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-649291" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-649291" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-649291" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-649291" does not exist

>>> k8s: coredns logs:
error: context "cilium-649291" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-649291" does not exist

>>> k8s: api server logs:
error: context "cilium-649291" does not exist

>>> host: /etc/cni:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: ip a s:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: ip r s:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: iptables-save:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: iptables table nat:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-649291

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-649291

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-649291" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-649291" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-649291

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-649291

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-649291" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-649291" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-649291" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-649291" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-649291" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: kubelet daemon config:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> k8s: kubelet logs:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20427-855004/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 17 Feb 2025 13:17:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-517819
contexts:
- context:
    cluster: pause-517819
    extensions:
    - extension:
        last-update: Mon, 17 Feb 2025 13:17:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-517819
  name: pause-517819
current-context: pause-517819
kind: Config
preferences: {}
users:
- name: pause-517819
  user:
    client-certificate: /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/pause-517819/client.crt
    client-key: /home/jenkins/minikube-integration/20427-855004/.minikube/profiles/pause-517819/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-649291

>>> host: docker daemon status:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: docker daemon config:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: docker system info:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: cri-docker daemon status:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: cri-docker daemon config:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: cri-dockerd version:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: containerd daemon status:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: containerd daemon config:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: containerd config dump:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: crio daemon status:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: crio daemon config:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: /etc/crio:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

>>> host: crio config:
* Profile "cilium-649291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649291"

----------------------- debugLogs end: cilium-649291 [took: 5.133618073s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-649291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-649291
--- SKIP: TestNetworkPlugins/group/cilium (5.30s)
