Test Report: Docker_Linux_crio_arm64 20201

6e9b07ade0635411356e4d21ba1a5eb0b1199aec:2025-04-14:39140

Failed tests (1/331)

Order  Failed test                   Duration
36     TestAddons/parallel/Ingress   151.29s
TestAddons/parallel/Ingress (151.29s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-225375 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-225375 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-225375 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f6b43b94-3567-4f2d-8579-e86e3cedbe61] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f6b43b94-3567-4f2d-8579-e86e3cedbe61] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003665573s
I0414 17:34:40.450971  463312 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-225375 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.530062814s)

** stderr **
	ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-225375 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-225375
helpers_test.go:235: (dbg) docker inspect addons-225375:
-- stdout --
	[
	    {
	        "Id": "bc29855cba0362dc60248bdd155272d4f278831d6cbbcf4dacc69f89eca4d473",
	        "Created": "2025-04-14T17:31:26.779480598Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 464471,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-14T17:31:26.842789047Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e51065ad0661308920dfd7c7ddda445e530a6bf56321f8317cb47e1df0975e7c",
	        "ResolvConfPath": "/var/lib/docker/containers/bc29855cba0362dc60248bdd155272d4f278831d6cbbcf4dacc69f89eca4d473/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bc29855cba0362dc60248bdd155272d4f278831d6cbbcf4dacc69f89eca4d473/hostname",
	        "HostsPath": "/var/lib/docker/containers/bc29855cba0362dc60248bdd155272d4f278831d6cbbcf4dacc69f89eca4d473/hosts",
	        "LogPath": "/var/lib/docker/containers/bc29855cba0362dc60248bdd155272d4f278831d6cbbcf4dacc69f89eca4d473/bc29855cba0362dc60248bdd155272d4f278831d6cbbcf4dacc69f89eca4d473-json.log",
	        "Name": "/addons-225375",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-225375:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-225375",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bc29855cba0362dc60248bdd155272d4f278831d6cbbcf4dacc69f89eca4d473",
	                "LowerDir": "/var/lib/docker/overlay2/490ffade22643236e7cb43761fa6bd8ed90da019074f4280fb341733afe74194-init/diff:/var/lib/docker/overlay2/c4f1be13b35dc9e3a7065f5523e670871640b2ac90cf774c3c66d2ad49ab233c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/490ffade22643236e7cb43761fa6bd8ed90da019074f4280fb341733afe74194/merged",
	                "UpperDir": "/var/lib/docker/overlay2/490ffade22643236e7cb43761fa6bd8ed90da019074f4280fb341733afe74194/diff",
	                "WorkDir": "/var/lib/docker/overlay2/490ffade22643236e7cb43761fa6bd8ed90da019074f4280fb341733afe74194/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-225375",
	                "Source": "/var/lib/docker/volumes/addons-225375/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-225375",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-225375",
	                "name.minikube.sigs.k8s.io": "addons-225375",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "694dab442ba30530a7966007dfcb3bfbaf408ce5b916f4fa39793b1e64a2291c",
	            "SandboxKey": "/var/run/docker/netns/694dab442ba3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-225375": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:b1:20:24:df:48",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d80a9d131cb840be15d59c5d5192881041608095731bf8b40a46c938fcb9fbe9",
	                    "EndpointID": "7c27f818a05b6fac70efbc5535a3c28a73478c503ed4b05afa730817440ecdff",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-225375",
	                        "bc29855cba03"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-225375 -n addons-225375
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-225375 logs -n 25: (1.518830358s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-935890                                                                     | download-only-935890   | jenkins | v1.35.0 | 14 Apr 25 17:31 UTC | 14 Apr 25 17:31 UTC |
	| start   | --download-only -p                                                                          | download-docker-817347 | jenkins | v1.35.0 | 14 Apr 25 17:31 UTC |                     |
	|         | download-docker-817347                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-817347                                                                   | download-docker-817347 | jenkins | v1.35.0 | 14 Apr 25 17:31 UTC | 14 Apr 25 17:31 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-047608   | jenkins | v1.35.0 | 14 Apr 25 17:31 UTC |                     |
	|         | binary-mirror-047608                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42079                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-047608                                                                     | binary-mirror-047608   | jenkins | v1.35.0 | 14 Apr 25 17:31 UTC | 14 Apr 25 17:31 UTC |
	| addons  | disable dashboard -p                                                                        | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:31 UTC |                     |
	|         | addons-225375                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:31 UTC |                     |
	|         | addons-225375                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-225375 --wait=true                                                                | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:31 UTC | 14 Apr 25 17:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-225375 addons disable                                                                | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:33 UTC | 14 Apr 25 17:33 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-225375 addons disable                                                                | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:33 UTC | 14 Apr 25 17:33 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:33 UTC | 14 Apr 25 17:33 UTC |
	|         | -p addons-225375                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-225375 addons disable                                                                | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:34 UTC | 14 Apr 25 17:34 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-225375 ip                                                                            | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:34 UTC | 14 Apr 25 17:34 UTC |
	| addons  | addons-225375 addons disable                                                                | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:34 UTC | 14 Apr 25 17:34 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-225375 addons                                                                        | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:34 UTC | 14 Apr 25 17:34 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-225375 addons                                                                        | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:34 UTC | 14 Apr 25 17:34 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-225375 ssh curl -s                                                                   | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:34 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-225375 addons                                                                        | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:34 UTC | 14 Apr 25 17:34 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-225375 addons                                                                        | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:34 UTC | 14 Apr 25 17:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-225375 addons                                                                        | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:35 UTC | 14 Apr 25 17:35 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-225375 addons disable                                                                | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:35 UTC | 14 Apr 25 17:35 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-225375 ssh cat                                                                       | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:35 UTC | 14 Apr 25 17:35 UTC |
	|         | /opt/local-path-provisioner/pvc-85319c25-c053-4685-8a1f-c7a523e159f2_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-225375 addons disable                                                                | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:35 UTC | 14 Apr 25 17:36 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-225375 addons                                                                        | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:36 UTC | 14 Apr 25 17:36 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-225375 ip                                                                            | addons-225375          | jenkins | v1.35.0 | 14 Apr 25 17:36 UTC | 14 Apr 25 17:36 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 17:31:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 17:31:01.650155  464072 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:31:01.650374  464072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:31:01.650386  464072 out.go:358] Setting ErrFile to fd 2...
	I0414 17:31:01.650392  464072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:31:01.650682  464072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
	I0414 17:31:01.651184  464072 out.go:352] Setting JSON to false
	I0414 17:31:01.652115  464072 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8008,"bootTime":1744643854,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0414 17:31:01.652187  464072 start.go:139] virtualization:  
	I0414 17:31:01.655581  464072 out.go:177] * [addons-225375] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0414 17:31:01.658490  464072 out.go:177]   - MINIKUBE_LOCATION=20201
	I0414 17:31:01.658627  464072 notify.go:220] Checking for updates...
	I0414 17:31:01.664730  464072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:31:01.667709  464072 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20201-457936/kubeconfig
	I0414 17:31:01.670656  464072 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20201-457936/.minikube
	I0414 17:31:01.673651  464072 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0414 17:31:01.676635  464072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:31:01.679643  464072 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:31:01.702888  464072 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0414 17:31:01.703031  464072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 17:31:01.772361  464072 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-04-14 17:31:01.762288784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0414 17:31:01.772495  464072 docker.go:318] overlay module found
	I0414 17:31:01.775990  464072 out.go:177] * Using the docker driver based on user configuration
	I0414 17:31:01.778879  464072 start.go:297] selected driver: docker
	I0414 17:31:01.778908  464072 start.go:901] validating driver "docker" against <nil>
	I0414 17:31:01.778924  464072 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:31:01.779654  464072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 17:31:01.841525  464072 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-04-14 17:31:01.831929801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0414 17:31:01.841690  464072 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 17:31:01.841936  464072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:31:01.844849  464072 out.go:177] * Using Docker driver with root privileges
	I0414 17:31:01.847682  464072 cni.go:84] Creating CNI manager for ""
	I0414 17:31:01.847757  464072 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0414 17:31:01.847772  464072 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0414 17:31:01.847856  464072 start.go:340] cluster config:
	{Name:addons-225375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-225375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:31:01.852698  464072 out.go:177] * Starting "addons-225375" primary control-plane node in "addons-225375" cluster
	I0414 17:31:01.855534  464072 cache.go:121] Beginning downloading kic base image for docker with crio
	I0414 17:31:01.858487  464072 out.go:177] * Pulling base image v0.0.46-1744107393-20604 ...
	I0414 17:31:01.861396  464072 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 17:31:01.861468  464072 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20201-457936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4
	I0414 17:31:01.861482  464072 cache.go:56] Caching tarball of preloaded images
	I0414 17:31:01.861489  464072 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon
	I0414 17:31:01.861576  464072 preload.go:172] Found /home/jenkins/minikube-integration/20201-457936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0414 17:31:01.861586  464072 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 17:31:01.861916  464072 profile.go:143] Saving config to /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/config.json ...
	I0414 17:31:01.861947  464072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/config.json: {Name:mk20b02477127cf12b3c472555489a57b2778aef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:31:01.877916  464072 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a to local cache
	I0414 17:31:01.878063  464072 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local cache directory
	I0414 17:31:01.878083  464072 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local cache directory, skipping pull
	I0414 17:31:01.878088  464072 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a exists in cache, skipping pull
	I0414 17:31:01.878095  464072 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a as a tarball
	I0414 17:31:01.878100  464072 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a from local cache
	I0414 17:31:19.281246  464072 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a from cached tarball
	I0414 17:31:19.281282  464072 cache.go:230] Successfully downloaded all kic artifacts
	I0414 17:31:19.281309  464072 start.go:360] acquireMachinesLock for addons-225375: {Name:mk8e7cf66d7a201aa1ff9f7d4f98371b4635c1d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:31:19.281415  464072 start.go:364] duration metric: took 85.17µs to acquireMachinesLock for "addons-225375"
	I0414 17:31:19.281443  464072 start.go:93] Provisioning new machine with config: &{Name:addons-225375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-225375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:31:19.281508  464072 start.go:125] createHost starting for "" (driver="docker")
	I0414 17:31:19.284929  464072 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0414 17:31:19.285189  464072 start.go:159] libmachine.API.Create for "addons-225375" (driver="docker")
	I0414 17:31:19.285227  464072 client.go:168] LocalClient.Create starting
	I0414 17:31:19.285342  464072 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20201-457936/.minikube/certs/ca.pem
	I0414 17:31:19.660855  464072 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20201-457936/.minikube/certs/cert.pem
	I0414 17:31:20.325637  464072 cli_runner.go:164] Run: docker network inspect addons-225375 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0414 17:31:20.341317  464072 cli_runner.go:211] docker network inspect addons-225375 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0414 17:31:20.341425  464072 network_create.go:284] running [docker network inspect addons-225375] to gather additional debugging logs...
	I0414 17:31:20.341446  464072 cli_runner.go:164] Run: docker network inspect addons-225375
	W0414 17:31:20.357365  464072 cli_runner.go:211] docker network inspect addons-225375 returned with exit code 1
	I0414 17:31:20.357402  464072 network_create.go:287] error running [docker network inspect addons-225375]: docker network inspect addons-225375: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-225375 not found
	I0414 17:31:20.357430  464072 network_create.go:289] output of [docker network inspect addons-225375]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-225375 not found
	
	** /stderr **
	I0414 17:31:20.357543  464072 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0414 17:31:20.373321  464072 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ba8350}
	I0414 17:31:20.373360  464072 network_create.go:124] attempt to create docker network addons-225375 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0414 17:31:20.373413  464072 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-225375 addons-225375
	I0414 17:31:20.433706  464072 network_create.go:108] docker network addons-225375 192.168.49.0/24 created
	I0414 17:31:20.433737  464072 kic.go:121] calculated static IP "192.168.49.2" for the "addons-225375" container
	I0414 17:31:20.433820  464072 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0414 17:31:20.449074  464072 cli_runner.go:164] Run: docker volume create addons-225375 --label name.minikube.sigs.k8s.io=addons-225375 --label created_by.minikube.sigs.k8s.io=true
	I0414 17:31:20.466480  464072 oci.go:103] Successfully created a docker volume addons-225375
	I0414 17:31:20.466578  464072 cli_runner.go:164] Run: docker run --rm --name addons-225375-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-225375 --entrypoint /usr/bin/test -v addons-225375:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -d /var/lib
	I0414 17:31:22.494461  464072 cli_runner.go:217] Completed: docker run --rm --name addons-225375-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-225375 --entrypoint /usr/bin/test -v addons-225375:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -d /var/lib: (2.0278414s)
	I0414 17:31:22.494488  464072 oci.go:107] Successfully prepared a docker volume addons-225375
	I0414 17:31:22.494534  464072 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 17:31:22.494557  464072 kic.go:194] Starting extracting preloaded images to volume ...
	I0414 17:31:22.494624  464072 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20201-457936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-225375:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -I lz4 -xf /preloaded.tar -C /extractDir
	I0414 17:31:26.713461  464072 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20201-457936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-225375:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -I lz4 -xf /preloaded.tar -C /extractDir: (4.21879536s)
	I0414 17:31:26.713491  464072 kic.go:203] duration metric: took 4.218930409s to extract preloaded images to volume ...
	W0414 17:31:26.713635  464072 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0414 17:31:26.713752  464072 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0414 17:31:26.765175  464072 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-225375 --name addons-225375 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-225375 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-225375 --network addons-225375 --ip 192.168.49.2 --volume addons-225375:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a
	I0414 17:31:27.080565  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Running}}
	I0414 17:31:27.103957  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:27.128774  464072 cli_runner.go:164] Run: docker exec addons-225375 stat /var/lib/dpkg/alternatives/iptables
	I0414 17:31:27.183191  464072 oci.go:144] the created container "addons-225375" has a running status.
	I0414 17:31:27.183227  464072 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa...
	I0414 17:31:27.598007  464072 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0414 17:31:27.627906  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:27.650221  464072 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0414 17:31:27.650241  464072 kic_runner.go:114] Args: [docker exec --privileged addons-225375 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0414 17:31:27.706526  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:27.738642  464072 machine.go:93] provisionDockerMachine start ...
	I0414 17:31:27.738746  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:27.760410  464072 main.go:141] libmachine: Using SSH client type: native
	I0414 17:31:27.760735  464072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33167 <nil> <nil>}
	I0414 17:31:27.760746  464072 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 17:31:27.902116  464072 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-225375
	
	I0414 17:31:27.902139  464072 ubuntu.go:169] provisioning hostname "addons-225375"
	I0414 17:31:27.902208  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:27.921729  464072 main.go:141] libmachine: Using SSH client type: native
	I0414 17:31:27.922050  464072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33167 <nil> <nil>}
	I0414 17:31:27.922062  464072 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-225375 && echo "addons-225375" | sudo tee /etc/hostname
	I0414 17:31:28.069038  464072 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-225375
	
	I0414 17:31:28.069125  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:28.088239  464072 main.go:141] libmachine: Using SSH client type: native
	I0414 17:31:28.088594  464072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33167 <nil> <nil>}
	I0414 17:31:28.088619  464072 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-225375' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-225375/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-225375' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:31:28.219446  464072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:31:28.219470  464072 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20201-457936/.minikube CaCertPath:/home/jenkins/minikube-integration/20201-457936/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20201-457936/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20201-457936/.minikube}
	I0414 17:31:28.219497  464072 ubuntu.go:177] setting up certificates
	I0414 17:31:28.219506  464072 provision.go:84] configureAuth start
	I0414 17:31:28.219565  464072 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-225375
	I0414 17:31:28.239100  464072 provision.go:143] copyHostCerts
	I0414 17:31:28.239178  464072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20201-457936/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20201-457936/.minikube/ca.pem (1082 bytes)
	I0414 17:31:28.239327  464072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20201-457936/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20201-457936/.minikube/cert.pem (1123 bytes)
	I0414 17:31:28.239406  464072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20201-457936/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20201-457936/.minikube/key.pem (1679 bytes)
	I0414 17:31:28.239482  464072 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20201-457936/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20201-457936/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20201-457936/.minikube/certs/ca-key.pem org=jenkins.addons-225375 san=[127.0.0.1 192.168.49.2 addons-225375 localhost minikube]
	I0414 17:31:28.972583  464072 provision.go:177] copyRemoteCerts
	I0414 17:31:28.972652  464072 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:31:28.972693  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:28.989572  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:29.078964  464072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20201-457936/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:31:29.102087  464072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20201-457936/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 17:31:29.125761  464072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20201-457936/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 17:31:29.150400  464072 provision.go:87] duration metric: took 930.798287ms to configureAuth
	I0414 17:31:29.150429  464072 ubuntu.go:193] setting minikube options for container-runtime
	I0414 17:31:29.150614  464072 config.go:182] Loaded profile config "addons-225375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:31:29.150746  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:29.167045  464072 main.go:141] libmachine: Using SSH client type: native
	I0414 17:31:29.167347  464072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33167 <nil> <nil>}
	I0414 17:31:29.167367  464072 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:31:29.388545  464072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 17:31:29.388571  464072 machine.go:96] duration metric: took 1.649909147s to provisionDockerMachine
	I0414 17:31:29.388582  464072 client.go:171] duration metric: took 10.103345462s to LocalClient.Create
	I0414 17:31:29.388595  464072 start.go:167] duration metric: took 10.103408215s to libmachine.API.Create "addons-225375"
	I0414 17:31:29.388602  464072 start.go:293] postStartSetup for "addons-225375" (driver="docker")
	I0414 17:31:29.388613  464072 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:31:29.388678  464072 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:31:29.388721  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:29.408311  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:29.503348  464072 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:31:29.506392  464072 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0414 17:31:29.506428  464072 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0414 17:31:29.506440  464072 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0414 17:31:29.506448  464072 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0414 17:31:29.506459  464072 filesync.go:126] Scanning /home/jenkins/minikube-integration/20201-457936/.minikube/addons for local assets ...
	I0414 17:31:29.506533  464072 filesync.go:126] Scanning /home/jenkins/minikube-integration/20201-457936/.minikube/files for local assets ...
	I0414 17:31:29.506557  464072 start.go:296] duration metric: took 117.9487ms for postStartSetup
	I0414 17:31:29.506888  464072 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-225375
	I0414 17:31:29.523194  464072 profile.go:143] Saving config to /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/config.json ...
	I0414 17:31:29.523472  464072 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 17:31:29.523527  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:29.539436  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:29.631078  464072 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0414 17:31:29.635635  464072 start.go:128] duration metric: took 10.354110317s to createHost
	I0414 17:31:29.635661  464072 start.go:83] releasing machines lock for "addons-225375", held for 10.354234305s
	I0414 17:31:29.635743  464072 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-225375
	I0414 17:31:29.651705  464072 ssh_runner.go:195] Run: cat /version.json
	I0414 17:31:29.651732  464072 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:31:29.651757  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:29.651797  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:29.675904  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:29.678452  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:29.761782  464072 ssh_runner.go:195] Run: systemctl --version
	I0414 17:31:29.890625  464072 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:31:30.047108  464072 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 17:31:30.052087  464072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:31:30.075617  464072 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0414 17:31:30.075701  464072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:31:30.113196  464072 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0414 17:31:30.113221  464072 start.go:495] detecting cgroup driver to use...
	I0414 17:31:30.113259  464072 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0414 17:31:30.113313  464072 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:31:30.131063  464072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:31:30.143442  464072 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:31:30.143538  464072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:31:30.158716  464072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:31:30.173851  464072 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:31:30.265244  464072 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:31:30.352047  464072 docker.go:233] disabling docker service ...
	I0414 17:31:30.352116  464072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:31:30.372854  464072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:31:30.385111  464072 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:31:30.473008  464072 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:31:30.560669  464072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 17:31:30.572723  464072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:31:30.590264  464072 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 17:31:30.590357  464072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:31:30.600824  464072 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:31:30.600938  464072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:31:30.611914  464072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:31:30.622358  464072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:31:30.632897  464072 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 17:31:30.642751  464072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:31:30.652473  464072 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:31:30.668525  464072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
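The run of `sed -i` commands above rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place: it pins the pause image, switches the cgroup driver, and re-adds `conmon_cgroup = "pod"` after the driver line. A minimal sketch of the same transformations against a scratch copy (the file path and starting contents here are illustrative, not taken from the test host):

```shell
# Build an illustrative starting config (contents assumed, not from the log).
cat > /tmp/02-crio.conf <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Same edits the log performs: pin the pause image, force cgroupfs,
# drop any existing conmon_cgroup line, then append a fresh one
# immediately after cgroup_manager.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /tmp/02-crio.conf
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /tmp/02-crio.conf
sed -i '/conmon_cgroup = .*/d' /tmp/02-crio.conf
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /tmp/02-crio.conf

cat /tmp/02-crio.conf
```

Deleting before appending keeps the edit idempotent: rerunning the sequence leaves exactly one `conmon_cgroup` line.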
	I0414 17:31:30.678135  464072 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:31:30.686594  464072 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 17:31:30.694743  464072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:31:30.781684  464072 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:31:30.901790  464072 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:31:30.901877  464072 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:31:30.905670  464072 start.go:563] Will wait 60s for crictl version
	I0414 17:31:30.905733  464072 ssh_runner.go:195] Run: which crictl
	I0414 17:31:30.909007  464072 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:31:30.944170  464072 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0414 17:31:30.944279  464072 ssh_runner.go:195] Run: crio --version
	I0414 17:31:30.983378  464072 ssh_runner.go:195] Run: crio --version
	I0414 17:31:31.026398  464072 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0414 17:31:31.029290  464072 cli_runner.go:164] Run: docker network inspect addons-225375 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0414 17:31:31.044907  464072 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0414 17:31:31.048555  464072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:31:31.059281  464072 kubeadm.go:883] updating cluster {Name:addons-225375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-225375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 17:31:31.059409  464072 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 17:31:31.059475  464072 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:31:31.140598  464072 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 17:31:31.140621  464072 crio.go:433] Images already preloaded, skipping extraction
	I0414 17:31:31.140678  464072 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:31:31.181910  464072 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 17:31:31.181932  464072 cache_images.go:84] Images are preloaded, skipping loading
	I0414 17:31:31.181940  464072 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.2 crio true true} ...
	I0414 17:31:31.182030  464072 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-225375 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-225375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 17:31:31.182111  464072 ssh_runner.go:195] Run: crio config
	I0414 17:31:31.229789  464072 cni.go:84] Creating CNI manager for ""
	I0414 17:31:31.229813  464072 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0414 17:31:31.229825  464072 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 17:31:31.229857  464072 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-225375 NodeName:addons-225375 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 17:31:31.230005  464072 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-225375"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 17:31:31.230089  464072 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 17:31:31.238985  464072 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 17:31:31.239053  464072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 17:31:31.247716  464072 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0414 17:31:31.265248  464072 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 17:31:31.282608  464072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0414 17:31:31.299711  464072 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0414 17:31:31.302996  464072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
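The `/etc/hosts` update above uses a remove-then-append pattern so the entry for `control-plane.minikube.internal` stays unique across reruns. A sketch of the same pattern against a scratch file (the `/tmp/hosts.demo` path and seed contents are illustrative assumptions):

```shell
# Scratch stand-in for /etc/hosts, pre-seeded with a stale entry.
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n192.168.49.2\tcontrol-plane.minikube.internal\n' > "$HOSTS"

# Mirror the log's one-liner: filter out any existing line for the name
# (anchored on a tab, like the original), append a fresh entry, then
# copy the result back over the hosts file.
TAB="$(printf '\t')"
{ grep -v "${TAB}control-plane.minikube.internal\$" "$HOSTS"; \
  printf '192.168.49.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$HOSTS"

cat "$HOSTS"
```

Because the old line is filtered out before the new one is written, running this any number of times leaves exactly one entry for the name.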
	I0414 17:31:31.313420  464072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:31:31.409999  464072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:31:31.423428  464072 certs.go:68] Setting up /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375 for IP: 192.168.49.2
	I0414 17:31:31.423451  464072 certs.go:194] generating shared ca certs ...
	I0414 17:31:31.423468  464072 certs.go:226] acquiring lock for ca certs: {Name:mkbb8624379f5963485c1057be1cddcbd7ad1e9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:31:31.423596  464072 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20201-457936/.minikube/ca.key
	I0414 17:31:31.606134  464072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20201-457936/.minikube/ca.crt ...
	I0414 17:31:31.606164  464072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20201-457936/.minikube/ca.crt: {Name:mk71486328d403cf014e4c3957d7689dfe1066bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:31:31.607087  464072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20201-457936/.minikube/ca.key ...
	I0414 17:31:31.607106  464072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20201-457936/.minikube/ca.key: {Name:mkfcb9cfa24005ce0c41bc2b79afc04ccf477006 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:31:31.607197  464072 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20201-457936/.minikube/proxy-client-ca.key
	I0414 17:31:32.396893  464072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20201-457936/.minikube/proxy-client-ca.crt ...
	I0414 17:31:32.396931  464072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20201-457936/.minikube/proxy-client-ca.crt: {Name:mkf4597a0a6d6e725e0c53aa93c56867e0a29abd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:31:32.397211  464072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20201-457936/.minikube/proxy-client-ca.key ...
	I0414 17:31:32.397229  464072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20201-457936/.minikube/proxy-client-ca.key: {Name:mkd92134c9be81cae189180113018e17eb710fda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:31:32.397355  464072 certs.go:256] generating profile certs ...
	I0414 17:31:32.397430  464072 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.key
	I0414 17:31:32.397462  464072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt with IP's: []
	I0414 17:31:32.566620  464072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt ...
	I0414 17:31:32.566652  464072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: {Name:mk8e8c36c812a9d19824b2273b33dc54f1edce76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:31:32.566846  464072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.key ...
	I0414 17:31:32.566860  464072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.key: {Name:mkca4f0baefcc5c5c2399d13af70b8c2c76ee196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:31:32.566949  464072 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/apiserver.key.1d46ba1f
	I0414 17:31:32.566971  464072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/apiserver.crt.1d46ba1f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0414 17:31:33.562881  464072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/apiserver.crt.1d46ba1f ...
	I0414 17:31:33.562914  464072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/apiserver.crt.1d46ba1f: {Name:mkf7ec3ee4f2e55e841360c76fbb4e684327d890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:31:33.563762  464072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/apiserver.key.1d46ba1f ...
	I0414 17:31:33.563781  464072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/apiserver.key.1d46ba1f: {Name:mk789f8bed584cf5039e722268985fde1da74f12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:31:33.563875  464072 certs.go:381] copying /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/apiserver.crt.1d46ba1f -> /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/apiserver.crt
	I0414 17:31:33.563967  464072 certs.go:385] copying /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/apiserver.key.1d46ba1f -> /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/apiserver.key
	I0414 17:31:33.564021  464072 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/proxy-client.key
	I0414 17:31:33.564041  464072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/proxy-client.crt with IP's: []
	I0414 17:31:33.933121  464072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/proxy-client.crt ...
	I0414 17:31:33.933156  464072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/proxy-client.crt: {Name:mk21e9727d38ee90f0ae63fe5e760a891e3752a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:31:33.933339  464072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/proxy-client.key ...
	I0414 17:31:33.933367  464072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/proxy-client.key: {Name:mk7272008ae704bfc63e37afc654e5bc66570ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:31:33.933552  464072 certs.go:484] found cert: /home/jenkins/minikube-integration/20201-457936/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 17:31:33.933594  464072 certs.go:484] found cert: /home/jenkins/minikube-integration/20201-457936/.minikube/certs/ca.pem (1082 bytes)
	I0414 17:31:33.933639  464072 certs.go:484] found cert: /home/jenkins/minikube-integration/20201-457936/.minikube/certs/cert.pem (1123 bytes)
	I0414 17:31:33.933671  464072 certs.go:484] found cert: /home/jenkins/minikube-integration/20201-457936/.minikube/certs/key.pem (1679 bytes)
	I0414 17:31:33.934279  464072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20201-457936/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 17:31:33.958898  464072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20201-457936/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 17:31:33.982093  464072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20201-457936/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 17:31:34.014785  464072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20201-457936/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 17:31:34.039447  464072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 17:31:34.064025  464072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 17:31:34.088853  464072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 17:31:34.112615  464072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 17:31:34.136061  464072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20201-457936/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 17:31:34.160618  464072 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 17:31:34.177796  464072 ssh_runner.go:195] Run: openssl version
	I0414 17:31:34.183368  464072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 17:31:34.192773  464072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:31:34.196089  464072 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:31:34.196155  464072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:31:34.202847  464072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 17:31:34.211990  464072 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 17:31:34.215086  464072 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 17:31:34.215135  464072 kubeadm.go:392] StartCluster: {Name:addons-225375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-225375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:31:34.215206  464072 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 17:31:34.215266  464072 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:31:34.250829  464072 cri.go:89] found id: ""
	I0414 17:31:34.250951  464072 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 17:31:34.259671  464072 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:31:34.268312  464072 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0414 17:31:34.268378  464072 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:31:34.276799  464072 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:31:34.276823  464072 kubeadm.go:157] found existing configuration files:
	
	I0414 17:31:34.276892  464072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:31:34.285653  464072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:31:34.285741  464072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:31:34.293950  464072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:31:34.302577  464072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:31:34.302651  464072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:31:34.310679  464072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:31:34.319169  464072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:31:34.319244  464072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:31:34.327685  464072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:31:34.336054  464072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:31:34.336147  464072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
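The grep-then-rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. A minimal sketch of that logic, run against a temp dir (an assumption, so it never touches a real /etc/kubernetes):

```shell
# Stale kubeconfig cleanup as in the log above: drop any config that does
# not reference the expected API endpoint. Temp dir stands in for
# /etc/kubernetes so this is safe to run anywhere.
endpoint="https://control-plane.minikube.internal:8443"
dir="$(mktemp -d)"
printf 'server: %s\n' "$endpoint"        > "$dir/admin.conf"    # current
printf 'server: https://old-host:8443\n' > "$dir/kubelet.conf"  # stale

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # grep fails both for a missing endpoint and a missing file; either way
  # the file is treated as stale and removed (rm -f tolerates absence).
  grep -q "$endpoint" "$dir/$f" 2>/dev/null || rm -f "$dir/$f"
done
ls "$dir"   # only admin.conf remains
```

Note that in this run all four files were already absent (the `ls -la` check exited with status 2), so the cleanup degenerated to four no-op `rm -f` calls before `kubeadm init`.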
	I0414 17:31:34.344466  464072 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0414 17:31:34.382715  464072 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 17:31:34.383019  464072 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:31:34.421248  464072 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0414 17:31:34.421323  464072 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1081-aws
	I0414 17:31:34.421363  464072 kubeadm.go:310] OS: Linux
	I0414 17:31:34.421413  464072 kubeadm.go:310] CGROUPS_CPU: enabled
	I0414 17:31:34.421464  464072 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0414 17:31:34.421514  464072 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0414 17:31:34.421566  464072 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0414 17:31:34.421617  464072 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0414 17:31:34.421682  464072 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0414 17:31:34.421732  464072 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0414 17:31:34.421783  464072 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0414 17:31:34.421833  464072 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0414 17:31:34.497011  464072 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:31:34.497131  464072 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:31:34.497227  464072 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 17:31:34.503976  464072 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:31:34.510140  464072 out.go:235]   - Generating certificates and keys ...
	I0414 17:31:34.510307  464072 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:31:34.510485  464072 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:31:34.968997  464072 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 17:31:35.515457  464072 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 17:31:35.858653  464072 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 17:31:37.211210  464072 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 17:31:37.757097  464072 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 17:31:37.757245  464072 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-225375 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0414 17:31:38.267728  464072 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 17:31:38.268167  464072 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-225375 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0414 17:31:38.651195  464072 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 17:31:38.983210  464072 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 17:31:39.424959  464072 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 17:31:39.425234  464072 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:31:40.416642  464072 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:31:40.617106  464072 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 17:31:40.841240  464072 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:31:41.270657  464072 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:31:41.936334  464072 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:31:41.937371  464072 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:31:41.942672  464072 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:31:41.946181  464072 out.go:235]   - Booting up control plane ...
	I0414 17:31:41.946285  464072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:31:41.952821  464072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:31:41.954933  464072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:31:41.970348  464072 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:31:41.976660  464072 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:31:41.976715  464072 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:31:42.072607  464072 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 17:31:42.072735  464072 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 17:31:44.073685  464072 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001537016s
	I0414 17:31:44.073791  464072 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 17:31:49.575682  464072 kubeadm.go:310] [api-check] The API server is healthy after 5.501974452s
	I0414 17:31:49.596715  464072 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 17:31:49.609739  464072 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 17:31:49.641065  464072 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 17:31:49.641279  464072 kubeadm.go:310] [mark-control-plane] Marking the node addons-225375 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 17:31:49.653348  464072 kubeadm.go:310] [bootstrap-token] Using token: kcr3r4.z8eoeojieab0t9vb
	I0414 17:31:49.658368  464072 out.go:235]   - Configuring RBAC rules ...
	I0414 17:31:49.658521  464072 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 17:31:49.660761  464072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 17:31:49.669565  464072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 17:31:49.673362  464072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 17:31:49.676954  464072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 17:31:49.680202  464072 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 17:31:49.982467  464072 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 17:31:50.448434  464072 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 17:31:50.983349  464072 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 17:31:50.984515  464072 kubeadm.go:310] 
	I0414 17:31:50.984591  464072 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 17:31:50.984604  464072 kubeadm.go:310] 
	I0414 17:31:50.984683  464072 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 17:31:50.984712  464072 kubeadm.go:310] 
	I0414 17:31:50.984742  464072 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 17:31:50.984803  464072 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 17:31:50.984857  464072 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 17:31:50.984865  464072 kubeadm.go:310] 
	I0414 17:31:50.984918  464072 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 17:31:50.984925  464072 kubeadm.go:310] 
	I0414 17:31:50.984972  464072 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 17:31:50.984980  464072 kubeadm.go:310] 
	I0414 17:31:50.985031  464072 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 17:31:50.985108  464072 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 17:31:50.985193  464072 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 17:31:50.985202  464072 kubeadm.go:310] 
	I0414 17:31:50.985289  464072 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 17:31:50.985371  464072 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 17:31:50.985379  464072 kubeadm.go:310] 
	I0414 17:31:50.985470  464072 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kcr3r4.z8eoeojieab0t9vb \
	I0414 17:31:50.985578  464072 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bb7de4c0fe8f35d6d1eb28db0050346749c5756c8c4e2da967ad68b93d10d0ea \
	I0414 17:31:50.985604  464072 kubeadm.go:310] 	--control-plane 
	I0414 17:31:50.985613  464072 kubeadm.go:310] 
	I0414 17:31:50.985697  464072 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 17:31:50.985706  464072 kubeadm.go:310] 
	I0414 17:31:50.985787  464072 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kcr3r4.z8eoeojieab0t9vb \
	I0414 17:31:50.985891  464072 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bb7de4c0fe8f35d6d1eb28db0050346749c5756c8c4e2da967ad68b93d10d0ea 
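The `--discovery-token-ca-cert-hash` printed in the join command above is the SHA-256 digest of the cluster CA's DER-encoded public key. kubeadm's documented recipe for recomputing it, run here against a throwaway self-signed CA so the sketch is self-contained:

```shell
# Recompute a discovery-token-ca-cert-hash. A throwaway CA in a temp dir
# stands in for /etc/kubernetes/pki/ca.crt (assumption).
dir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=test-ca" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null

# kubeadm's documented pipeline: extract the public key, convert to DER,
# hash with SHA-256, keep only the hex digest.
hash="$(openssl x509 -pubkey -noout -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')"
echo "sha256:$hash"
```

Against a real cluster you would point the first `openssl x509` at `/etc/kubernetes/pki/ca.crt` and compare the result to the hash in the join command.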
	I0414 17:31:50.989874  464072 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0414 17:31:50.990096  464072 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1081-aws\n", err: exit status 1
	I0414 17:31:50.990203  464072 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:31:50.990225  464072 cni.go:84] Creating CNI manager for ""
	I0414 17:31:50.990233  464072 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0414 17:31:50.993492  464072 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0414 17:31:50.996499  464072 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 17:31:51.001157  464072 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 17:31:51.001184  464072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0414 17:31:51.022787  464072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 17:31:51.315821  464072 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 17:31:51.315887  464072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:31:51.315980  464072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-225375 minikube.k8s.io/updated_at=2025_04_14T17_31_51_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6d9971b005454362d638ce6593a2c72bc063c6f0 minikube.k8s.io/name=addons-225375 minikube.k8s.io/primary=true
	I0414 17:31:51.508182  464072 ops.go:34] apiserver oom_adj: -16
	I0414 17:31:51.508286  464072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:31:52.008950  464072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:31:52.508869  464072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:31:53.009001  464072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:31:53.509296  464072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:31:54.009009  464072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:31:54.508394  464072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:31:55.008961  464072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:31:55.508806  464072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:31:56.008395  464072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:31:56.205490  464072 kubeadm.go:1113] duration metric: took 4.889659605s to wait for elevateKubeSystemPrivileges
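The burst of `kubectl get sa default` calls at ~500ms intervals above is minikube's elevateKubeSystemPrivileges wait: it polls until the `default` ServiceAccount exists before binding RBAC to it. A generic retry helper sketching that loop (`retry_until` is a hypothetical name; minikube implements the loop in Go):

```shell
# retry_until TIMEOUT_SECS INTERVAL CMD...: rerun CMD every INTERVAL until
# it succeeds or TIMEOUT_SECS elapse. Mirrors the ~500ms polling above.
retry_until() {
  local deadline interval
  deadline=$(( $(date +%s) + $1 )); shift
  interval=$1; shift
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1   # deadline passed without a success
    fi
    sleep "$interval"
  done
}

# Against a real cluster this would be something like:
#   retry_until 300 0.5 kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
retry_until 5 0.2 true && echo "ready"
```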
	I0414 17:31:56.205522  464072 kubeadm.go:394] duration metric: took 21.990390189s to StartCluster
	I0414 17:31:56.205540  464072 settings.go:142] acquiring lock: {Name:mk88baba8190d295c56f4af2633b8e08c5a0d8cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:31:56.205661  464072 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20201-457936/kubeconfig
	I0414 17:31:56.206046  464072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20201-457936/kubeconfig: {Name:mk79068ab48bf688e1b7e5f6f8adb971b2138d48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:31:56.206906  464072 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 17:31:56.206948  464072 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:31:56.207173  464072 config.go:182] Loaded profile config "addons-225375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:31:56.207204  464072 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0414 17:31:56.207303  464072 addons.go:69] Setting yakd=true in profile "addons-225375"
	I0414 17:31:56.207326  464072 addons.go:238] Setting addon yakd=true in "addons-225375"
	I0414 17:31:56.207348  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.207834  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.208033  464072 addons.go:69] Setting inspektor-gadget=true in profile "addons-225375"
	I0414 17:31:56.208059  464072 addons.go:238] Setting addon inspektor-gadget=true in "addons-225375"
	I0414 17:31:56.208107  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.208274  464072 addons.go:69] Setting metrics-server=true in profile "addons-225375"
	I0414 17:31:56.208297  464072 addons.go:238] Setting addon metrics-server=true in "addons-225375"
	I0414 17:31:56.208329  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.208724  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.208764  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.208773  464072 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-225375"
	I0414 17:31:56.213240  464072 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-225375"
	I0414 17:31:56.213356  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.213980  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.213988  464072 out.go:177] * Verifying Kubernetes components...
	I0414 17:31:56.208787  464072 addons.go:69] Setting storage-provisioner=true in profile "addons-225375"
	I0414 17:31:56.214339  464072 addons.go:238] Setting addon storage-provisioner=true in "addons-225375"
	I0414 17:31:56.214377  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.214806  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.220766  464072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:31:56.208792  464072 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-225375"
	I0414 17:31:56.221274  464072 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-225375"
	I0414 17:31:56.221612  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.208804  464072 addons.go:69] Setting volcano=true in profile "addons-225375"
	I0414 17:31:56.234951  464072 addons.go:238] Setting addon volcano=true in "addons-225375"
	I0414 17:31:56.235005  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.235510  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.208811  464072 addons.go:69] Setting volumesnapshots=true in profile "addons-225375"
	I0414 17:31:56.253246  464072 addons.go:238] Setting addon volumesnapshots=true in "addons-225375"
	I0414 17:31:56.253288  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.253776  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.208864  464072 addons.go:69] Setting default-storageclass=true in profile "addons-225375"
	I0414 17:31:56.264677  464072 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-225375"
	I0414 17:31:56.265017  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.208868  464072 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-225375"
	I0414 17:31:56.289010  464072 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-225375"
	I0414 17:31:56.289050  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.289534  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.208872  464072 addons.go:69] Setting cloud-spanner=true in profile "addons-225375"
	I0414 17:31:56.299820  464072 addons.go:238] Setting addon cloud-spanner=true in "addons-225375"
	I0414 17:31:56.299862  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.300337  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.208875  464072 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-225375"
	I0414 17:31:56.332685  464072 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-225375"
	I0414 17:31:56.332723  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.333391  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.208880  464072 addons.go:69] Setting ingress=true in profile "addons-225375"
	I0414 17:31:56.347695  464072 addons.go:238] Setting addon ingress=true in "addons-225375"
	I0414 17:31:56.347752  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.348289  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.208883  464072 addons.go:69] Setting gcp-auth=true in profile "addons-225375"
	I0414 17:31:56.371582  464072 mustload.go:65] Loading cluster: addons-225375
	I0414 17:31:56.371890  464072 config.go:182] Loaded profile config "addons-225375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:31:56.372252  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.208886  464072 addons.go:69] Setting ingress-dns=true in profile "addons-225375"
	I0414 17:31:56.397080  464072 addons.go:238] Setting addon ingress-dns=true in "addons-225375"
	I0414 17:31:56.397163  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.397716  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.407285  464072 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0414 17:31:56.411866  464072 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0414 17:31:56.411894  464072 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0414 17:31:56.412038  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:56.208781  464072 addons.go:69] Setting registry=true in profile "addons-225375"
	I0414 17:31:56.414782  464072 addons.go:238] Setting addon registry=true in "addons-225375"
	I0414 17:31:56.414829  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.415375  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.438576  464072 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0414 17:31:56.444216  464072 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0414 17:31:56.444244  464072 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0414 17:31:56.444321  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:56.464928  464072 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0414 17:31:56.469803  464072 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:31:56.469981  464072 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 17:31:56.470008  464072 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 17:31:56.470115  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:56.472884  464072 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:31:56.472905  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 17:31:56.472969  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	W0414 17:31:56.496676  464072 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0414 17:31:56.499333  464072 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-225375"
	I0414 17:31:56.499368  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.499784  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.541464  464072 addons.go:238] Setting addon default-storageclass=true in "addons-225375"
	I0414 17:31:56.541527  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.542226  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:31:56.549221  464072 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0414 17:31:56.558172  464072 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0414 17:31:56.558246  464072 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0414 17:31:56.558370  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:56.588936  464072 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0414 17:31:56.597842  464072 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0414 17:31:56.597912  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0414 17:31:56.598007  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:56.642275  464072 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0414 17:31:56.645310  464072 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.31
	I0414 17:31:56.645629  464072 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0414 17:31:56.645658  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0414 17:31:56.645751  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:56.648338  464072 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0414 17:31:56.648357  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0414 17:31:56.648423  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:56.659426  464072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:31:56.659979  464072 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
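The long pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a static `hosts` block (mapping `host.minikube.internal` to the gateway IP) before the `forward` plugin and a `log` directive before `errors`. The same sed transform applied to a sample Corefile fragment instead of the live ConfigMap (GNU sed assumed, as on the minikube node):

```shell
# CoreDNS Corefile rewrite from the log above, on a sample fragment.
corefile='        errors
        forward . /etc/resolv.conf
'
patched="$(printf '%s' "$corefile" | sed \
  -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
  -e '/^        errors *$/i \        log')"
printf '%s\n' "$patched"
```

In the real command the fragment comes from `kubectl -n kube-system get configmap coredns -o yaml` and the result is fed back through `kubectl replace -f -`.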
	I0414 17:31:56.681969  464072 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0414 17:31:56.686780  464072 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0414 17:31:56.686846  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0414 17:31:56.686957  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:56.694607  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:31:56.696664  464072 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.2
	I0414 17:31:56.703639  464072 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.2
	I0414 17:31:56.706536  464072 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.12.1
	I0414 17:31:56.706654  464072 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0414 17:31:56.710155  464072 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0414 17:31:56.710180  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0414 17:31:56.710248  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:56.678909  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:56.717694  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:56.720135  464072 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0414 17:31:56.722749  464072 out.go:177]   - Using image docker.io/registry:2.8.3
	I0414 17:31:56.722849  464072 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0414 17:31:56.725487  464072 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0414 17:31:56.728550  464072 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0414 17:31:56.728578  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0414 17:31:56.728693  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:56.732820  464072 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0414 17:31:56.735461  464072 out.go:177]   - Using image docker.io/busybox:stable
	I0414 17:31:56.758196  464072 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0414 17:31:56.758219  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0414 17:31:56.758284  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:56.786388  464072 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0414 17:31:56.789374  464072 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0414 17:31:56.792226  464072 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0414 17:31:56.794646  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:56.806883  464072 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0414 17:31:56.807564  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:56.816521  464072 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0414 17:31:56.826955  464072 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0414 17:31:56.827103  464072 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0414 17:31:56.827272  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:56.836272  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:56.847282  464072 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 17:31:56.847302  464072 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 17:31:56.847362  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:31:56.862466  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:56.867266  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:56.911103  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:56.911189  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:56.932072  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:56.933227  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:56.950079  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:56.962991  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:31:56.965762  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	W0414 17:31:56.968294  464072 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0414 17:31:56.968326  464072 retry.go:31] will retry after 290.593093ms: ssh: handshake failed: EOF
	I0414 17:31:57.216410  464072 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 17:31:57.216441  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0414 17:31:57.298121  464072 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0414 17:31:57.298146  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0414 17:31:57.304596  464072 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0414 17:31:57.304621  464072 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0414 17:31:57.312375  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:31:57.331198  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0414 17:31:57.341577  464072 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0414 17:31:57.341611  464072 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0414 17:31:57.393685  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0414 17:31:57.413771  464072 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 17:31:57.413810  464072 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 17:31:57.420977  464072 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0414 17:31:57.421020  464072 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0414 17:31:57.422550  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 17:31:57.447413  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0414 17:31:57.450461  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0414 17:31:57.472471  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0414 17:31:57.500483  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0414 17:31:57.513030  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0414 17:31:57.515109  464072 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0414 17:31:57.515133  464072 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0414 17:31:57.532547  464072 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0414 17:31:57.532576  464072 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0414 17:31:57.574078  464072 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0414 17:31:57.574104  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0414 17:31:57.606839  464072 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:31:57.606866  464072 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 17:31:57.680690  464072 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0414 17:31:57.680715  464072 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0414 17:31:57.740545  464072 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0414 17:31:57.740573  464072 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0414 17:31:57.761560  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:31:57.765922  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0414 17:31:57.779864  464072 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0414 17:31:57.779913  464072 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0414 17:31:57.864381  464072 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0414 17:31:57.864415  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0414 17:31:57.933866  464072 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0414 17:31:57.933892  464072 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0414 17:31:57.965425  464072 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0414 17:31:57.965453  464072 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0414 17:31:58.065556  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0414 17:31:58.151479  464072 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 17:31:58.151504  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0414 17:31:58.181701  464072 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0414 17:31:58.181741  464072 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0414 17:31:58.296150  464072 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0414 17:31:58.296222  464072 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0414 17:31:58.352156  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 17:31:58.459969  464072 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0414 17:31:58.460045  464072 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0414 17:31:58.556914  464072 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0414 17:31:58.556985  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0414 17:31:58.701129  464072 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.041656379s)
	I0414 17:31:58.701895  464072 node_ready.go:35] waiting up to 6m0s for node "addons-225375" to be "Ready" ...
	I0414 17:31:58.701965  464072 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.041964779s)
	I0414 17:31:58.702107  464072 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0414 17:31:58.715841  464072 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0414 17:31:58.715905  464072 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0414 17:31:58.882426  464072 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0414 17:31:58.882502  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0414 17:31:59.052745  464072 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0414 17:31:59.052828  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0414 17:31:59.207864  464072 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0414 17:31:59.207936  464072 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0414 17:31:59.419515  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0414 17:31:59.713452  464072 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-225375" context rescaled to 1 replicas
	I0414 17:32:00.728522  464072 node_ready.go:53] node "addons-225375" has status "Ready":"False"
	I0414 17:32:02.373665  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.061254329s)
	I0414 17:32:02.373723  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.042500466s)
	I0414 17:32:03.212238  464072 node_ready.go:53] node "addons-225375" has status "Ready":"False"
	I0414 17:32:03.440716  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.018123955s)
	I0414 17:32:03.440776  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.99333849s)
	I0414 17:32:03.440870  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.96837516s)
	I0414 17:32:03.440906  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.940394762s)
	I0414 17:32:03.440960  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.927909918s)
	I0414 17:32:03.441003  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.679420967s)
	I0414 17:32:03.441399  464072 addons.go:479] Verifying addon metrics-server=true in "addons-225375"
	I0414 17:32:03.441035  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.675088423s)
	I0414 17:32:03.441414  464072 addons.go:479] Verifying addon registry=true in "addons-225375"
	I0414 17:32:03.441063  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.375472342s)
	I0414 17:32:03.441810  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.048098322s)
	I0414 17:32:03.441840  464072 addons.go:479] Verifying addon ingress=true in "addons-225375"
	I0414 17:32:03.441115  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.990336173s)
	I0414 17:32:03.444577  464072 out.go:177] * Verifying registry addon...
	I0414 17:32:03.446443  464072 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-225375 service yakd-dashboard -n yakd-dashboard
	
	I0414 17:32:03.446520  464072 out.go:177] * Verifying ingress addon...
	I0414 17:32:03.449228  464072 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0414 17:32:03.450947  464072 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0414 17:32:03.458564  464072 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0414 17:32:03.458587  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:03.458780  464072 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0414 17:32:03.458814  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0414 17:32:03.468841  464072 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0414 17:32:03.496483  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.144243488s)
	W0414 17:32:03.496525  464072 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0414 17:32:03.496546  464072 retry.go:31] will retry after 297.366514ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0414 17:32:03.794842  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 17:32:03.804099  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.384521766s)
	I0414 17:32:03.804137  464072 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-225375"
	I0414 17:32:03.809210  464072 out.go:177] * Verifying csi-hostpath-driver addon...
	I0414 17:32:03.812992  464072 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0414 17:32:03.825154  464072 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0414 17:32:03.825220  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:03.952641  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:03.954567  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:03.964875  464072 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0414 17:32:03.965022  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:32:03.988490  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:32:04.099977  464072 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0414 17:32:04.120812  464072 addons.go:238] Setting addon gcp-auth=true in "addons-225375"
	I0414 17:32:04.120886  464072 host.go:66] Checking if "addons-225375" exists ...
	I0414 17:32:04.121408  464072 cli_runner.go:164] Run: docker container inspect addons-225375 --format={{.State.Status}}
	I0414 17:32:04.152236  464072 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0414 17:32:04.152297  464072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-225375
	I0414 17:32:04.178613  464072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/addons-225375/id_rsa Username:docker}
	I0414 17:32:04.317789  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:04.454885  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:04.455158  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:04.817195  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:04.952911  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:04.954005  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:05.318347  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:05.452275  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:05.454242  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:05.705725  464072 node_ready.go:53] node "addons-225375" has status "Ready":"False"
	I0414 17:32:05.816413  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:05.953394  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:05.954919  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:06.319826  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:06.454572  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:06.454796  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:06.547037  464072 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.394765722s)
	I0414 17:32:06.547126  464072 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.752088288s)
	I0414 17:32:06.550407  464072 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0414 17:32:06.554060  464072 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.2
	I0414 17:32:06.556681  464072 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0414 17:32:06.556709  464072 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0414 17:32:06.575897  464072 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0414 17:32:06.575922  464072 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0414 17:32:06.595388  464072 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0414 17:32:06.595412  464072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0414 17:32:06.613730  464072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0414 17:32:06.817464  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:06.952124  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:06.954238  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:07.128289  464072 addons.go:479] Verifying addon gcp-auth=true in "addons-225375"
	I0414 17:32:07.131490  464072 out.go:177] * Verifying gcp-auth addon...
	I0414 17:32:07.135126  464072 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0414 17:32:07.141000  464072 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0414 17:32:07.141029  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:07.316564  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:07.452580  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:07.460239  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:07.637848  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:07.816125  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:07.953900  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:07.954554  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:08.137935  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:08.204657  464072 node_ready.go:53] node "addons-225375" has status "Ready":"False"
	I0414 17:32:08.318242  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:08.452731  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:08.454798  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:08.638734  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:08.816548  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:08.952400  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:08.954480  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:09.138011  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:09.316508  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:09.452200  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:09.453785  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:09.639098  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:09.816352  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:09.952298  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:09.954373  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:10.138118  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:10.204909  464072 node_ready.go:53] node "addons-225375" has status "Ready":"False"
	I0414 17:32:10.317044  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:10.452766  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:10.454063  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:10.641363  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:10.815991  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:10.952708  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:10.953607  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:11.138638  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:11.316218  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:11.452604  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:11.454167  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:11.638411  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:11.816190  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:11.952094  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:11.954048  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:12.137719  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:12.205431  464072 node_ready.go:53] node "addons-225375" has status "Ready":"False"
	I0414 17:32:12.316325  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:12.452458  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:12.454043  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:12.638819  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:12.816236  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:12.951984  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:12.954139  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:13.145425  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:13.219822  464072 node_ready.go:49] node "addons-225375" has status "Ready":"True"
	I0414 17:32:13.219893  464072 node_ready.go:38] duration metric: took 14.517883746s for node "addons-225375" to be "Ready" ...
	I0414 17:32:13.219917  464072 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:32:13.291402  464072 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-hgkqw" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:13.355483  464072 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0414 17:32:13.355553  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:13.463962  464072 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0414 17:32:13.464030  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:13.464380  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:13.659838  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:13.816876  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:13.954426  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:13.954848  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:14.138879  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:14.317752  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:14.465941  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:14.466431  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:14.638963  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:14.817789  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:14.953764  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:14.955788  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:15.152973  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:15.300713  464072 pod_ready.go:103] pod "coredns-668d6bf9bc-hgkqw" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:15.320737  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:15.460254  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:15.460717  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:15.639909  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:15.803013  464072 pod_ready.go:93] pod "coredns-668d6bf9bc-hgkqw" in "kube-system" namespace has status "Ready":"True"
	I0414 17:32:15.803039  464072 pod_ready.go:82] duration metric: took 2.511550283s for pod "coredns-668d6bf9bc-hgkqw" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:15.803064  464072 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-225375" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:15.810942  464072 pod_ready.go:93] pod "etcd-addons-225375" in "kube-system" namespace has status "Ready":"True"
	I0414 17:32:15.810968  464072 pod_ready.go:82] duration metric: took 7.896463ms for pod "etcd-addons-225375" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:15.810992  464072 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-225375" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:15.818078  464072 pod_ready.go:93] pod "kube-apiserver-addons-225375" in "kube-system" namespace has status "Ready":"True"
	I0414 17:32:15.818104  464072 pod_ready.go:82] duration metric: took 7.104058ms for pod "kube-apiserver-addons-225375" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:15.818116  464072 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-225375" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:15.818726  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:15.824044  464072 pod_ready.go:93] pod "kube-controller-manager-addons-225375" in "kube-system" namespace has status "Ready":"True"
	I0414 17:32:15.824081  464072 pod_ready.go:82] duration metric: took 5.957379ms for pod "kube-controller-manager-addons-225375" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:15.824096  464072 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7r8l6" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:15.829306  464072 pod_ready.go:93] pod "kube-proxy-7r8l6" in "kube-system" namespace has status "Ready":"True"
	I0414 17:32:15.829340  464072 pod_ready.go:82] duration metric: took 5.236638ms for pod "kube-proxy-7r8l6" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:15.829353  464072 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-225375" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:15.954606  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:15.957550  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:16.139241  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:16.195125  464072 pod_ready.go:93] pod "kube-scheduler-addons-225375" in "kube-system" namespace has status "Ready":"True"
	I0414 17:32:16.195151  464072 pod_ready.go:82] duration metric: took 365.789608ms for pod "kube-scheduler-addons-225375" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:16.195163  464072 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-cckq5" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:16.316228  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:16.452239  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:16.455414  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:16.638722  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:16.817507  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:16.952422  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:16.954362  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:17.138664  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:17.317899  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:17.455359  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:17.456816  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:17.638896  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:17.820936  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:17.954045  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:17.954178  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:18.146504  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:18.201173  464072 pod_ready.go:103] pod "metrics-server-7fbb699795-cckq5" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:18.316761  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:18.453483  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:18.453720  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:18.638742  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:18.819015  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:18.953922  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:18.954340  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:19.139167  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:19.316428  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:19.452798  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:19.455265  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:19.639629  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:19.817590  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:19.957182  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:19.957519  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:20.139180  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:20.202364  464072 pod_ready.go:103] pod "metrics-server-7fbb699795-cckq5" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:20.320856  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:20.452515  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:20.455026  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:20.637770  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:20.816874  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:20.954545  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:20.955183  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:21.138283  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:21.317003  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:21.454371  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:21.454159  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:21.638401  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:21.818285  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:21.954758  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:21.954789  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:22.139548  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:22.316830  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:22.454081  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:22.454779  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:22.637812  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:22.701061  464072 pod_ready.go:103] pod "metrics-server-7fbb699795-cckq5" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:22.817229  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:22.952896  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:22.955139  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:23.138500  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:23.318371  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:23.453610  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:23.456161  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:23.638842  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:23.816754  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:23.952626  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:23.954399  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:24.138023  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:24.315998  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:24.453812  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:24.453945  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:24.638628  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:24.701642  464072 pod_ready.go:103] pod "metrics-server-7fbb699795-cckq5" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:24.816658  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:24.952124  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:24.953763  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:25.152155  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:25.317261  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:25.454258  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:25.454719  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:25.639199  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:25.817892  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:25.961367  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:25.961714  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:26.138927  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:26.317817  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:26.452503  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:26.454159  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:26.638436  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:26.816423  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:26.953358  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:26.954693  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:27.138258  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:27.200373  464072 pod_ready.go:103] pod "metrics-server-7fbb699795-cckq5" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:27.316691  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:27.452613  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:27.454513  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:27.638880  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:27.816365  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:27.953630  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:27.954158  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:28.140525  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:28.323135  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:28.453512  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:28.457032  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:28.638742  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:28.818367  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:28.954130  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:28.957704  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:29.143967  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:29.201631  464072 pod_ready.go:103] pod "metrics-server-7fbb699795-cckq5" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:29.317403  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:29.454014  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:29.455179  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:29.638880  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:29.816757  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:29.955290  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:29.955587  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:30.139568  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:30.318237  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:30.455092  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:30.455192  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:30.638509  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:30.817197  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:30.955666  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:30.956041  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:31.138722  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:31.318015  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:31.452911  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:31.454127  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:31.637738  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:31.702971  464072 pod_ready.go:103] pod "metrics-server-7fbb699795-cckq5" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:31.820174  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:31.955660  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:31.956490  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:32.138674  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:32.317792  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:32.454823  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:32.455174  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:32.639238  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:32.818047  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:32.954431  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:32.955005  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:33.137895  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:33.317937  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:33.454581  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:33.456326  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:33.639324  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:33.821036  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:33.955105  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:33.955321  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:34.140294  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:34.201393  464072 pod_ready.go:103] pod "metrics-server-7fbb699795-cckq5" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:34.317214  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:34.452414  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:34.454308  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:34.638097  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:34.816708  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:34.952233  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:34.953903  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:35.137589  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:35.318685  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:35.452948  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:35.455250  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:35.638988  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:35.816797  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:35.957593  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:35.958025  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:36.140034  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:36.316486  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:36.452135  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:36.453989  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:36.638439  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:36.703364  464072 pod_ready.go:103] pod "metrics-server-7fbb699795-cckq5" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:36.822311  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:36.955062  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:36.955365  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:37.138289  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:37.317232  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:37.453196  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:37.456177  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:37.640605  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:37.817706  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:37.955189  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:37.955786  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:38.139053  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:38.333600  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:38.459794  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:38.460410  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:38.638947  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:38.706045  464072 pod_ready.go:103] pod "metrics-server-7fbb699795-cckq5" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:38.817204  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:38.962809  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:38.978165  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:39.139450  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:39.317693  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:39.452616  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:39.456142  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:39.673144  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:39.723337  464072 pod_ready.go:93] pod "metrics-server-7fbb699795-cckq5" in "kube-system" namespace has status "Ready":"True"
	I0414 17:32:39.723408  464072 pod_ready.go:82] duration metric: took 23.528237818s for pod "metrics-server-7fbb699795-cckq5" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:39.723436  464072 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-66f28" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:39.824600  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:39.962970  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:39.963544  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:40.139162  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:40.318140  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:40.452644  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:40.455381  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:40.638435  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:40.816573  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:40.956269  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:40.957110  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:41.138291  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:41.317050  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:41.453341  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:41.453740  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:41.638699  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:41.728922  464072 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-66f28" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:41.816624  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:41.952326  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:41.954520  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:42.139590  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:42.316581  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:42.452329  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:42.454718  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:42.638825  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:42.816796  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:42.952891  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:42.954463  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:43.142406  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:43.316420  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:43.454268  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:43.454611  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:43.648921  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:43.729365  464072 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-66f28" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:43.816615  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:43.953258  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:43.954797  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:44.139555  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:44.317951  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:44.455716  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:44.456114  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:44.637936  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:44.817260  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:44.955930  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:44.956285  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:45.139507  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:45.318531  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:45.454731  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:45.455089  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:45.638406  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:45.816621  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:45.957111  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:45.958139  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:46.138662  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:46.229694  464072 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-66f28" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:46.318999  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:46.453938  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:46.454496  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:46.638799  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:46.816308  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:46.953289  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:46.955061  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:47.138374  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:47.321416  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:47.454273  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:47.457579  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:47.639160  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:47.818012  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:47.954777  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:47.956972  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:48.138671  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:48.236144  464072 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-66f28" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:48.317212  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:48.453506  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:48.455344  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:48.638435  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:48.816527  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:48.953690  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:48.955276  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:49.138046  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:49.316167  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:49.454639  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:49.455440  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:49.639925  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:49.816854  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:49.953775  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:49.957350  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:50.139614  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:50.318193  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:50.455864  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:50.457346  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:50.638015  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:50.732879  464072 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-66f28" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:50.818169  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:50.958875  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:50.959260  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:51.138499  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:51.317379  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:51.452731  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:51.453907  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:51.639175  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:51.816470  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:51.956635  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:51.957231  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:52.138086  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:52.317036  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:52.452071  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:52.454585  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:52.638873  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:52.745993  464072 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-66f28" in "kube-system" namespace has status "Ready":"False"
	I0414 17:32:52.816326  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:52.952760  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:52.955596  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:53.138910  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:53.316721  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:53.460863  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:53.461413  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:53.640804  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:53.816764  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:53.953464  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:53.954937  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:54.139092  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:54.229779  464072 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-66f28" in "kube-system" namespace has status "Ready":"True"
	I0414 17:32:54.229840  464072 pod_ready.go:82] duration metric: took 14.506384204s for pod "nvidia-device-plugin-daemonset-66f28" in "kube-system" namespace to be "Ready" ...
	I0414 17:32:54.229889  464072 pod_ready.go:39] duration metric: took 41.009942658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:32:54.229922  464072 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:32:54.230015  464072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:32:54.271100  464072 api_server.go:72] duration metric: took 58.064105355s to wait for apiserver process to appear ...
	I0414 17:32:54.271189  464072 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:32:54.271223  464072 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0414 17:32:54.286390  464072 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0414 17:32:54.288064  464072 api_server.go:141] control plane version: v1.32.2
	I0414 17:32:54.288129  464072 api_server.go:131] duration metric: took 16.918804ms to wait for apiserver health ...
	I0414 17:32:54.288151  464072 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:32:54.291962  464072 system_pods.go:59] 18 kube-system pods found
	I0414 17:32:54.292031  464072 system_pods.go:61] "coredns-668d6bf9bc-hgkqw" [7dc75b7e-60d8-41a2-9403-5ad6daab6b61] Running
	I0414 17:32:54.292051  464072 system_pods.go:61] "csi-hostpath-attacher-0" [f736264a-32c5-4da0-b957-837a7698aac0] Running
	I0414 17:32:54.292073  464072 system_pods.go:61] "csi-hostpath-resizer-0" [f972f5ab-7ca5-4ab0-82b7-27724ad62184] Running
	I0414 17:32:54.292113  464072 system_pods.go:61] "csi-hostpathplugin-gfx6q" [88aa6dad-ad9b-4568-8dcb-bddc7f2aa8a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0414 17:32:54.292132  464072 system_pods.go:61] "etcd-addons-225375" [e4469704-01c9-4a19-8c5e-3ae27ae37814] Running
	I0414 17:32:54.292156  464072 system_pods.go:61] "kindnet-krk7g" [4007e427-86bc-45a8-8c7a-cc35ed614201] Running
	I0414 17:32:54.292189  464072 system_pods.go:61] "kube-apiserver-addons-225375" [c902af95-c4a4-42ab-956a-e919c572216f] Running
	I0414 17:32:54.292210  464072 system_pods.go:61] "kube-controller-manager-addons-225375" [064fab7e-b2c6-497a-9f54-daa940e57949] Running
	I0414 17:32:54.292235  464072 system_pods.go:61] "kube-ingress-dns-minikube" [10d149e3-e641-47e6-950f-7a402e9797db] Running
	I0414 17:32:54.292271  464072 system_pods.go:61] "kube-proxy-7r8l6" [ea295f31-5081-44ec-b6d5-236948d87819] Running
	I0414 17:32:54.292292  464072 system_pods.go:61] "kube-scheduler-addons-225375" [74778d1c-d231-43ea-9103-14d5928bc71a] Running
	I0414 17:32:54.292329  464072 system_pods.go:61] "metrics-server-7fbb699795-cckq5" [3c6d4f27-a10c-41ac-89fe-ca8d03b92b77] Running
	I0414 17:32:54.292351  464072 system_pods.go:61] "nvidia-device-plugin-daemonset-66f28" [38c4e820-97af-4507-8c70-478ee09911c6] Running
	I0414 17:32:54.292376  464072 system_pods.go:61] "registry-6c88467877-n96sg" [d9b0d607-5c52-4819-a74e-00b7577de414] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0414 17:32:54.292413  464072 system_pods.go:61] "registry-proxy-sczjl" [52f0da01-0637-45eb-b3a3-30867a12edd4] Running
	I0414 17:32:54.292438  464072 system_pods.go:61] "snapshot-controller-68b874b76f-9r86j" [0787993b-9fed-474e-8324-136ba353a104] Running
	I0414 17:32:54.292461  464072 system_pods.go:61] "snapshot-controller-68b874b76f-zrhqx" [11d3b1e8-036c-4ce0-870d-3d9bde2783e8] Running
	I0414 17:32:54.292496  464072 system_pods.go:61] "storage-provisioner" [e49aa15c-b54b-4754-af79-436b0126551d] Running
	I0414 17:32:54.292521  464072 system_pods.go:74] duration metric: took 4.350104ms to wait for pod list to return data ...
	I0414 17:32:54.292544  464072 default_sa.go:34] waiting for default service account to be created ...
	I0414 17:32:54.295221  464072 default_sa.go:45] found service account: "default"
	I0414 17:32:54.295280  464072 default_sa.go:55] duration metric: took 2.703099ms for default service account to be created ...
	I0414 17:32:54.295302  464072 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 17:32:54.301276  464072 system_pods.go:86] 18 kube-system pods found
	I0414 17:32:54.301349  464072 system_pods.go:89] "coredns-668d6bf9bc-hgkqw" [7dc75b7e-60d8-41a2-9403-5ad6daab6b61] Running
	I0414 17:32:54.301372  464072 system_pods.go:89] "csi-hostpath-attacher-0" [f736264a-32c5-4da0-b957-837a7698aac0] Running
	I0414 17:32:54.301397  464072 system_pods.go:89] "csi-hostpath-resizer-0" [f972f5ab-7ca5-4ab0-82b7-27724ad62184] Running
	I0414 17:32:54.301439  464072 system_pods.go:89] "csi-hostpathplugin-gfx6q" [88aa6dad-ad9b-4568-8dcb-bddc7f2aa8a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0414 17:32:54.301460  464072 system_pods.go:89] "etcd-addons-225375" [e4469704-01c9-4a19-8c5e-3ae27ae37814] Running
	I0414 17:32:54.301500  464072 system_pods.go:89] "kindnet-krk7g" [4007e427-86bc-45a8-8c7a-cc35ed614201] Running
	I0414 17:32:54.301526  464072 system_pods.go:89] "kube-apiserver-addons-225375" [c902af95-c4a4-42ab-956a-e919c572216f] Running
	I0414 17:32:54.301550  464072 system_pods.go:89] "kube-controller-manager-addons-225375" [064fab7e-b2c6-497a-9f54-daa940e57949] Running
	I0414 17:32:54.301588  464072 system_pods.go:89] "kube-ingress-dns-minikube" [10d149e3-e641-47e6-950f-7a402e9797db] Running
	I0414 17:32:54.301610  464072 system_pods.go:89] "kube-proxy-7r8l6" [ea295f31-5081-44ec-b6d5-236948d87819] Running
	I0414 17:32:54.301633  464072 system_pods.go:89] "kube-scheduler-addons-225375" [74778d1c-d231-43ea-9103-14d5928bc71a] Running
	I0414 17:32:54.301667  464072 system_pods.go:89] "metrics-server-7fbb699795-cckq5" [3c6d4f27-a10c-41ac-89fe-ca8d03b92b77] Running
	I0414 17:32:54.301692  464072 system_pods.go:89] "nvidia-device-plugin-daemonset-66f28" [38c4e820-97af-4507-8c70-478ee09911c6] Running
	I0414 17:32:54.301717  464072 system_pods.go:89] "registry-6c88467877-n96sg" [d9b0d607-5c52-4819-a74e-00b7577de414] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0414 17:32:54.301753  464072 system_pods.go:89] "registry-proxy-sczjl" [52f0da01-0637-45eb-b3a3-30867a12edd4] Running
	I0414 17:32:54.301777  464072 system_pods.go:89] "snapshot-controller-68b874b76f-9r86j" [0787993b-9fed-474e-8324-136ba353a104] Running
	I0414 17:32:54.301806  464072 system_pods.go:89] "snapshot-controller-68b874b76f-zrhqx" [11d3b1e8-036c-4ce0-870d-3d9bde2783e8] Running
	I0414 17:32:54.301841  464072 system_pods.go:89] "storage-provisioner" [e49aa15c-b54b-4754-af79-436b0126551d] Running
	I0414 17:32:54.301867  464072 system_pods.go:126] duration metric: took 6.532364ms to wait for k8s-apps to be running ...
	I0414 17:32:54.301890  464072 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 17:32:54.301974  464072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:32:54.316355  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:54.357487  464072 system_svc.go:56] duration metric: took 55.576861ms WaitForService to wait for kubelet
	I0414 17:32:54.357567  464072 kubeadm.go:582] duration metric: took 58.150577964s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:32:54.357600  464072 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:32:54.362217  464072 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0414 17:32:54.362258  464072 node_conditions.go:123] node cpu capacity is 2
	I0414 17:32:54.362273  464072 node_conditions.go:105] duration metric: took 4.640329ms to run NodePressure ...
	I0414 17:32:54.362287  464072 start.go:241] waiting for startup goroutines ...
	I0414 17:32:54.453275  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:54.457775  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:54.639200  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:54.816743  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:54.954146  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:54.954382  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:55.138563  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:55.316571  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:55.452067  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 17:32:55.454487  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:55.638523  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:55.816811  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:55.963150  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:55.963520  464072 kapi.go:107] duration metric: took 52.514294321s to wait for kubernetes.io/minikube-addons=registry ...
	I0414 17:32:56.138510  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:56.316972  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:56.454907  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:56.638506  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:56.816828  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:56.954231  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:57.142443  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:57.316331  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:57.454199  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:57.638226  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:57.819071  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:57.954879  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:58.141199  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:58.316717  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:58.454675  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:58.638049  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:58.816888  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:58.955106  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:59.138239  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:59.320252  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:59.455621  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:32:59.639943  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:32:59.817905  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:32:59.955221  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:00.147307  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:00.317885  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:00.455212  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:00.638121  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:00.817108  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:00.954551  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:01.145041  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:01.316773  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:01.454500  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:01.638941  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:01.816521  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:01.954518  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:02.138813  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:02.317410  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:02.454527  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:02.638633  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:02.817285  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:02.954719  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:03.138841  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:03.317752  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:03.454944  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:03.641097  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:03.816713  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:03.956064  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:04.139422  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:04.317740  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:04.455317  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:04.637872  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:04.817262  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:04.957793  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:05.138984  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:05.317368  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:05.454923  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:05.638863  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:05.822170  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:05.954655  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:06.147982  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:06.316651  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:06.454420  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:06.639150  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:06.819812  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:06.953859  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:07.143260  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:07.317092  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:07.453790  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:07.638288  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:07.816594  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:07.954462  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:08.138917  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:08.317481  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:08.455057  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:08.639652  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:08.817443  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:08.954676  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:09.140721  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:09.316761  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:09.454943  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:09.640147  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:09.817105  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:09.955000  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:10.139261  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:10.317291  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:10.454275  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:10.640991  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:10.816463  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:10.954864  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:11.139012  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:11.316735  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:11.459360  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:11.638422  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:11.816792  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:11.955181  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:12.138779  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:12.316923  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:12.453757  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:12.639074  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:12.815895  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:12.954026  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:13.140541  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:13.321358  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:13.455006  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:13.638642  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:13.817597  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:13.954674  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:14.138580  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:14.317650  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:14.455569  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:14.638605  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:14.816896  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:14.954619  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:15.139232  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 17:33:15.316566  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:15.454877  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:15.639269  464072 kapi.go:107] duration metric: took 1m8.504141814s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0414 17:33:15.642754  464072 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-225375 cluster.
	I0414 17:33:15.645651  464072 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0414 17:33:15.648361  464072 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0414 17:33:15.816193  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:15.955984  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:16.323742  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:16.455257  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:16.817672  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:16.954646  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:17.315996  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:17.454613  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:17.816585  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:17.954467  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:18.317648  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:18.455947  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:18.824832  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:18.954436  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:19.317318  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:19.453950  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:19.818586  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:19.954601  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:20.326365  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:20.454548  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:20.820717  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:20.955127  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:21.317731  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:21.454978  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:21.817246  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:21.954576  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:22.317968  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:22.454785  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:22.817447  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:22.954753  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:23.318530  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:23.454823  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:23.817429  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:23.954553  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:24.316894  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:24.454538  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:24.817339  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:24.955095  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:25.317952  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:25.454663  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:25.817983  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:25.954220  464072 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 17:33:26.316805  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:26.454209  464072 kapi.go:107] duration metric: took 1m23.003259997s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0414 17:33:26.816906  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:27.316958  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:27.817191  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:28.317037  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:28.817477  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:29.316669  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:29.817083  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:30.316915  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:30.816345  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:31.318095  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:31.817752  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:32.316568  464072 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 17:33:32.816426  464072 kapi.go:107] duration metric: took 1m29.003436984s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0414 17:33:32.820463  464072 out.go:177] * Enabled addons: , nvidia-device-plugin, cloud-spanner, ingress-dns, inspektor-gadget, metrics-server, amd-gpu-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0414 17:33:32.823355  464072 addons.go:514] duration metric: took 1m36.61613936s for enable addons: enabled=[ nvidia-device-plugin cloud-spanner ingress-dns inspektor-gadget metrics-server amd-gpu-device-plugin yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0414 17:33:32.823413  464072 start.go:246] waiting for cluster config update ...
	E0414 17:33:32.823440  464072 addons.go:591] store failed:  is not a valid addon
	I0414 17:33:32.823470  464072 start.go:255] writing updated cluster config ...
	I0414 17:33:32.824806  464072 ssh_runner.go:195] Run: rm -f paused
	I0414 17:33:33.227544  464072 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 17:33:33.231317  464072 out.go:177] * Done! kubectl is now configured to use "addons-225375" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 14 17:36:50 addons-225375 crio[986]: time="2025-04-14 17:36:50.863879533Z" level=info msg="Removed pod sandbox: 3099dd786e7f28f0d8816d0194c30e25157d8118f041ba3d421b505f2abb8ae2" id=deb4c3bf-026f-4d16-a30f-c1d0fcb5fd64 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 14 17:36:51 addons-225375 crio[986]: time="2025-04-14 17:36:51.500155354Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-5wjlq/POD" id=37eb4f9a-aa40-4074-863a-d4b51099a376 name=/runtime.v1.RuntimeService/RunPodSandbox
	Apr 14 17:36:51 addons-225375 crio[986]: time="2025-04-14 17:36:51.500213003Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 14 17:36:51 addons-225375 crio[986]: time="2025-04-14 17:36:51.556799720Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-5wjlq Namespace:default ID:42ee581622a401de78350248fa69a093e548a50538a64e36a76d00c02f8fc60b UID:b0321f8e-0eae-43e1-bf1b-2900d97b9f72 NetNS:/var/run/netns/ef528882-3cd3-4767-830f-36d2754d4537 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 14 17:36:51 addons-225375 crio[986]: time="2025-04-14 17:36:51.556849354Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-5wjlq to CNI network \"kindnet\" (type=ptp)"
	Apr 14 17:36:51 addons-225375 crio[986]: time="2025-04-14 17:36:51.569355662Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-5wjlq Namespace:default ID:42ee581622a401de78350248fa69a093e548a50538a64e36a76d00c02f8fc60b UID:b0321f8e-0eae-43e1-bf1b-2900d97b9f72 NetNS:/var/run/netns/ef528882-3cd3-4767-830f-36d2754d4537 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 14 17:36:51 addons-225375 crio[986]: time="2025-04-14 17:36:51.569509698Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-5wjlq for CNI network kindnet (type=ptp)"
	Apr 14 17:36:51 addons-225375 crio[986]: time="2025-04-14 17:36:51.574496710Z" level=info msg="Ran pod sandbox 42ee581622a401de78350248fa69a093e548a50538a64e36a76d00c02f8fc60b with infra container: default/hello-world-app-7d9564db4-5wjlq/POD" id=37eb4f9a-aa40-4074-863a-d4b51099a376 name=/runtime.v1.RuntimeService/RunPodSandbox
	Apr 14 17:36:51 addons-225375 crio[986]: time="2025-04-14 17:36:51.575463325Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=882f121f-6e94-4ef9-bf2c-90a5bd64f795 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 17:36:51 addons-225375 crio[986]: time="2025-04-14 17:36:51.575713576Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=882f121f-6e94-4ef9-bf2c-90a5bd64f795 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 17:36:51 addons-225375 crio[986]: time="2025-04-14 17:36:51.576325561Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=402918de-f65f-4cbd-92ed-f21cdff45115 name=/runtime.v1.ImageService/PullImage
	Apr 14 17:36:51 addons-225375 crio[986]: time="2025-04-14 17:36:51.578938387Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Apr 14 17:36:51 addons-225375 crio[986]: time="2025-04-14 17:36:51.863761542Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Apr 14 17:36:52 addons-225375 crio[986]: time="2025-04-14 17:36:52.554385305Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=402918de-f65f-4cbd-92ed-f21cdff45115 name=/runtime.v1.ImageService/PullImage
	Apr 14 17:36:52 addons-225375 crio[986]: time="2025-04-14 17:36:52.555167188Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=1a1f8ef7-59b6-4f2f-bf2f-2074791b01e8 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 17:36:52 addons-225375 crio[986]: time="2025-04-14 17:36:52.556183912Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1a1f8ef7-59b6-4f2f-bf2f-2074791b01e8 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 17:36:52 addons-225375 crio[986]: time="2025-04-14 17:36:52.558964083Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f5436a1d-995d-444c-ab40-baeb79881941 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 17:36:52 addons-225375 crio[986]: time="2025-04-14 17:36:52.559590149Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f5436a1d-995d-444c-ab40-baeb79881941 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 17:36:52 addons-225375 crio[986]: time="2025-04-14 17:36:52.562410510Z" level=info msg="Creating container: default/hello-world-app-7d9564db4-5wjlq/hello-world-app" id=a421db91-54ab-40c0-b2c8-0ec3bd2dfcbd name=/runtime.v1.RuntimeService/CreateContainer
	Apr 14 17:36:52 addons-225375 crio[986]: time="2025-04-14 17:36:52.562503598Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 14 17:36:52 addons-225375 crio[986]: time="2025-04-14 17:36:52.588139128Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4e67a11cf45e178c2a25cb874cbf254e48f3c0611e8a36a240951a094d37b4fc/merged/etc/passwd: no such file or directory"
	Apr 14 17:36:52 addons-225375 crio[986]: time="2025-04-14 17:36:52.588394138Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4e67a11cf45e178c2a25cb874cbf254e48f3c0611e8a36a240951a094d37b4fc/merged/etc/group: no such file or directory"
	Apr 14 17:36:52 addons-225375 crio[986]: time="2025-04-14 17:36:52.655450112Z" level=info msg="Created container f69529303da0523e633070b5b466f99776ddaf2d5effa8c7a9b228e144141e66: default/hello-world-app-7d9564db4-5wjlq/hello-world-app" id=a421db91-54ab-40c0-b2c8-0ec3bd2dfcbd name=/runtime.v1.RuntimeService/CreateContainer
	Apr 14 17:36:52 addons-225375 crio[986]: time="2025-04-14 17:36:52.656187006Z" level=info msg="Starting container: f69529303da0523e633070b5b466f99776ddaf2d5effa8c7a9b228e144141e66" id=04844cd7-1109-4b47-bcab-d75003fafd66 name=/runtime.v1.RuntimeService/StartContainer
	Apr 14 17:36:52 addons-225375 crio[986]: time="2025-04-14 17:36:52.673945190Z" level=info msg="Started container" PID=8444 containerID=f69529303da0523e633070b5b466f99776ddaf2d5effa8c7a9b228e144141e66 description=default/hello-world-app-7d9564db4-5wjlq/hello-world-app id=04844cd7-1109-4b47-bcab-d75003fafd66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=42ee581622a401de78350248fa69a093e548a50538a64e36a76d00c02f8fc60b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	f69529303da05       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   42ee581622a40       hello-world-app-7d9564db4-5wjlq
	e48e562b4d883       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago            Running             nginx                     0                   4c08d4ad80ce4       nginx
	9e3fac2b736cc       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   07eae9e2cfa30       busybox
	acd574b3eaf4c       registry.k8s.io/ingress-nginx/controller@sha256:8f46d9aff787f5fb96217cea9f185e83eaaec72e992328231e662972b9735cb7             3 minutes ago            Running             controller                0                   21b555f5b0d88       ingress-nginx-controller-7fd88b9777-2kz27
	68c5e0af48a35       641fb937f707171f80acdea24eb6604eeb8ac839873a759b66bb2d979bf6a872                                                             3 minutes ago            Exited              patch                     2                   639992b31dde6       ingress-nginx-admission-patch-d7klk
	0bb2245705fca       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:5966745a8562819f7e185eeee4bf54ae59be46aef88ac7b47c434357894a122c   3 minutes ago            Exited              create                    0                   f66b029ccd676       ingress-nginx-admission-create-d6c7w
	3a9ec6fb4cf8e       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             4 minutes ago            Running             minikube-ingress-dns      0                   e0470b7c4a977       kube-ingress-dns-minikube
	4a4f6dd7eaeff       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             4 minutes ago            Running             coredns                   0                   189888077f1a7       coredns-668d6bf9bc-hgkqw
	bc835ca2b911b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago            Running             storage-provisioner       0                   f300298e7de7a       storage-provisioner
	0984f510a5536       docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955                           4 minutes ago            Running             kindnet-cni               0                   e06485fe8d63c       kindnet-krk7g
	a6e5c33a36975       e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062                                                             4 minutes ago            Running             kube-proxy                0                   10d72897b504b       kube-proxy-7r8l6
	028a52a56f74d       6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32                                                             5 minutes ago            Running             kube-apiserver            0                   6cb02c5a0f134       kube-apiserver-addons-225375
	6f7987f86bbe3       3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d                                                             5 minutes ago            Running             kube-controller-manager   0                   30aaedef060eb       kube-controller-manager-addons-225375
	d9e820fdd8bbd       82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911                                                             5 minutes ago            Running             kube-scheduler            0                   7239d102bb7c5       kube-scheduler-addons-225375
	a09cfe61fc8a0       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82                                                             5 minutes ago            Running             etcd                      0                   2bdbe7b97980f       etcd-addons-225375
	
	
	==> coredns [4a4f6dd7eaeff7bf34dfce33a55946ff3e39b2010b6dc8719368207d30eaefdd] <==
	[INFO] 10.244.0.11:54317 - 21839 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001590389s
	[INFO] 10.244.0.11:54317 - 36372 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000120172s
	[INFO] 10.244.0.11:54317 - 8307 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000080542s
	[INFO] 10.244.0.11:53082 - 11719 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150721s
	[INFO] 10.244.0.11:53082 - 13048 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000217429s
	[INFO] 10.244.0.11:48890 - 57641 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000101572s
	[INFO] 10.244.0.11:48890 - 57461 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000157843s
	[INFO] 10.244.0.11:57120 - 57813 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100235s
	[INFO] 10.244.0.11:57120 - 57368 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001803s
	[INFO] 10.244.0.11:54337 - 17343 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00159002s
	[INFO] 10.244.0.11:54337 - 16906 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001625655s
	[INFO] 10.244.0.11:38282 - 4488 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014725s
	[INFO] 10.244.0.11:38282 - 4668 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000138232s
	[INFO] 10.244.0.20:40712 - 27990 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000181547s
	[INFO] 10.244.0.20:52515 - 18526 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000132875s
	[INFO] 10.244.0.20:53125 - 47406 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000112838s
	[INFO] 10.244.0.20:54355 - 48450 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00008421s
	[INFO] 10.244.0.20:55722 - 46567 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000162372s
	[INFO] 10.244.0.20:34680 - 50019 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000250798s
	[INFO] 10.244.0.20:48394 - 42486 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002471623s
	[INFO] 10.244.0.20:52981 - 26419 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002019388s
	[INFO] 10.244.0.20:47678 - 37693 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005948657s
	[INFO] 10.244.0.20:54610 - 57232 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00587385s
	[INFO] 10.244.0.24:33159 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000219422s
	[INFO] 10.244.0.24:42796 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000167812s
	
	
	==> describe nodes <==
	Name:               addons-225375
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-225375
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d9971b005454362d638ce6593a2c72bc063c6f0
	                    minikube.k8s.io/name=addons-225375
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T17_31_51_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-225375
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 17:31:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-225375
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 17:36:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 17:35:34 +0000   Mon, 14 Apr 2025 17:31:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 17:35:34 +0000   Mon, 14 Apr 2025 17:31:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 17:35:34 +0000   Mon, 14 Apr 2025 17:31:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 17:35:34 +0000   Mon, 14 Apr 2025 17:32:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-225375
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 a13ea6d4b4bd4db18d1bb11db32b94ad
	  System UUID:                89b3a1bc-9529-4cda-9022-f3caddd33a13
	  Boot ID:                    01bf700a-ee0f-4033-b408-c761220aa5f7
	  Kernel Version:             5.15.0-1081-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  default                     hello-world-app-7d9564db4-5wjlq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-7fd88b9777-2kz27    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m50s
	  kube-system                 coredns-668d6bf9bc-hgkqw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m57s
	  kube-system                 etcd-addons-225375                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m3s
	  kube-system                 kindnet-krk7g                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m58s
	  kube-system                 kube-apiserver-addons-225375                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-addons-225375        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-7r8l6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-addons-225375                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m52s                  kube-proxy       
	  Normal   Starting                 5m10s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m10s (x8 over 5m10s)  kubelet          Node addons-225375 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m10s (x8 over 5m10s)  kubelet          Node addons-225375 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m10s (x8 over 5m10s)  kubelet          Node addons-225375 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m3s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m3s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m3s                   kubelet          Node addons-225375 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m3s                   kubelet          Node addons-225375 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m3s                   kubelet          Node addons-225375 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m59s                  node-controller  Node addons-225375 event: Registered Node addons-225375 in Controller
	  Normal   NodeReady                4m41s                  kubelet          Node addons-225375 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr14 17:02] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [a09cfe61fc8a0d3ced8d6aaf31fe14de2a583c730a8a7d356a2b1493c5345d94] <==
	{"level":"warn","ts":"2025-04-14T17:32:00.352117Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T17:31:59.729218Z","time spent":"622.893054ms","remote":"127.0.0.1:59236","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" limit:1 "}
	{"level":"info","ts":"2025-04-14T17:32:00.352252Z","caller":"traceutil/trace.go:171","msg":"trace[203759420] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"382.164057ms","start":"2025-04-14T17:31:59.970081Z","end":"2025-04-14T17:32:00.352245Z","steps":["trace[203759420] 'process raft request'  (duration: 326.784308ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T17:32:00.352749Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T17:31:59.970063Z","time spent":"382.205526ms","remote":"127.0.0.1:59482","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3245,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:399 > success:<request_put:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" value_size:3174 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >"}
	{"level":"info","ts":"2025-04-14T17:32:00.353635Z","caller":"traceutil/trace.go:171","msg":"trace[47155106] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"338.176005ms","start":"2025-04-14T17:32:00.015447Z","end":"2025-04-14T17:32:00.353623Z","steps":["trace[47155106] 'process raft request'  (duration: 338.055167ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T17:32:00.353711Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T17:32:00.015178Z","time spent":"338.495907ms","remote":"127.0.0.1:59120","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":689,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns.18363fad6d2dc7b2\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns.18363fad6d2dc7b2\" value_size:618 lease:8128036583407714932 >> failure:<>"}
	{"level":"info","ts":"2025-04-14T17:32:00.474342Z","caller":"traceutil/trace.go:171","msg":"trace[20050068] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"177.021704ms","start":"2025-04-14T17:32:00.297292Z","end":"2025-04-14T17:32:00.474313Z","steps":["trace[20050068] 'process raft request'  (duration: 117.771599ms)","trace[20050068] 'compare'  (duration: 58.906982ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T17:32:00.477832Z","caller":"traceutil/trace.go:171","msg":"trace[676844622] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"180.293543ms","start":"2025-04-14T17:32:00.297521Z","end":"2025-04-14T17:32:00.477815Z","steps":["trace[676844622] 'process raft request'  (duration: 176.549071ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T17:32:00.480439Z","caller":"traceutil/trace.go:171","msg":"trace[1538924315] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"182.795526ms","start":"2025-04-14T17:32:00.297630Z","end":"2025-04-14T17:32:00.480425Z","steps":["trace[1538924315] 'process raft request'  (duration: 181.779941ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T17:32:00.483032Z","caller":"traceutil/trace.go:171","msg":"trace[718464341] linearizableReadLoop","detail":"{readStateIndex:423; appliedIndex:419; }","duration":"129.413684ms","start":"2025-04-14T17:32:00.353602Z","end":"2025-04-14T17:32:00.483015Z","steps":["trace[718464341] 'read index received'  (duration: 61.418703ms)","trace[718464341] 'applied index is now lower than readState.Index'  (duration: 67.992708ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T17:32:00.488790Z","caller":"traceutil/trace.go:171","msg":"trace[1383300589] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"111.838631ms","start":"2025-04-14T17:32:00.376929Z","end":"2025-04-14T17:32:00.488767Z","steps":["trace[1383300589] 'process raft request'  (duration: 102.967043ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T17:32:00.493416Z","caller":"traceutil/trace.go:171","msg":"trace[34005552] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"129.619863ms","start":"2025-04-14T17:32:00.351591Z","end":"2025-04-14T17:32:00.481211Z","steps":["trace[34005552] 'process raft request'  (duration: 127.917769ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T17:32:00.494120Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.40057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T17:32:00.494185Z","caller":"traceutil/trace.go:171","msg":"trace[2087214890] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:412; }","duration":"196.452444ms","start":"2025-04-14T17:32:00.297700Z","end":"2025-04-14T17:32:00.494152Z","steps":["trace[2087214890] 'agreement among raft nodes before linearized reading'  (duration: 196.36976ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T17:32:00.494387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.485294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T17:32:00.494433Z","caller":"traceutil/trace.go:171","msg":"trace[54777754] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:0; response_revision:412; }","duration":"142.519575ms","start":"2025-04-14T17:32:00.351891Z","end":"2025-04-14T17:32:00.494411Z","steps":["trace[54777754] 'agreement among raft nodes before linearized reading'  (duration: 142.471255ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T17:32:00.494550Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.672175ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T17:32:00.494594Z","caller":"traceutil/trace.go:171","msg":"trace[1873769221] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:0; response_revision:412; }","duration":"142.6967ms","start":"2025-04-14T17:32:00.351871Z","end":"2025-04-14T17:32:00.494567Z","steps":["trace[1873769221] 'agreement among raft nodes before linearized reading'  (duration: 142.662066ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T17:32:00.499633Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.049507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T17:32:00.499688Z","caller":"traceutil/trace.go:171","msg":"trace[1762492051] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:412; }","duration":"148.133807ms","start":"2025-04-14T17:32:00.351539Z","end":"2025-04-14T17:32:00.499673Z","steps":["trace[1762492051] 'agreement among raft nodes before linearized reading'  (duration: 148.01256ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T17:32:00.499912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.853258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T17:32:00.499937Z","caller":"traceutil/trace.go:171","msg":"trace[204833693] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:412; }","duration":"201.882296ms","start":"2025-04-14T17:32:00.298048Z","end":"2025-04-14T17:32:00.499930Z","steps":["trace[204833693] 'agreement among raft nodes before linearized reading'  (duration: 201.838235ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T17:32:00.500760Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.317583ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T17:32:00.500799Z","caller":"traceutil/trace.go:171","msg":"trace[1294306066] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:0; response_revision:412; }","duration":"203.375503ms","start":"2025-04-14T17:32:00.297414Z","end":"2025-04-14T17:32:00.500789Z","steps":["trace[1294306066] 'agreement among raft nodes before linearized reading'  (duration: 203.30484ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T17:32:00.501968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.986391ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-04-14T17:32:00.502014Z","caller":"traceutil/trace.go:171","msg":"trace[1260345325] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:412; }","duration":"124.938944ms","start":"2025-04-14T17:32:00.377065Z","end":"2025-04-14T17:32:00.502004Z","steps":["trace[1260345325] 'agreement among raft nodes before linearized reading'  (duration: 123.935593ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:36:53 up  2:19,  0 users,  load average: 0.55, 1.76, 2.80
	Linux addons-225375 5.15.0-1081-aws #88~20.04.1-Ubuntu SMP Fri Mar 28 14:48:25 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [0984f510a553688dc2a22b56edf2b01084713c6507f25cefc87c14d99a275b1b] <==
	I0414 17:34:52.640102       1 main.go:301] handling current node
	I0414 17:35:02.640437       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 17:35:02.640478       1 main.go:301] handling current node
	I0414 17:35:12.640509       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 17:35:12.640551       1 main.go:301] handling current node
	I0414 17:35:22.640682       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 17:35:22.640719       1 main.go:301] handling current node
	I0414 17:35:32.642385       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 17:35:32.642417       1 main.go:301] handling current node
	I0414 17:35:42.643347       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 17:35:42.643383       1 main.go:301] handling current node
	I0414 17:35:52.648105       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 17:35:52.648135       1 main.go:301] handling current node
	I0414 17:36:02.639904       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 17:36:02.639942       1 main.go:301] handling current node
	I0414 17:36:12.639849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 17:36:12.639884       1 main.go:301] handling current node
	I0414 17:36:22.646390       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 17:36:22.646424       1 main.go:301] handling current node
	I0414 17:36:32.641383       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 17:36:32.641418       1 main.go:301] handling current node
	I0414 17:36:42.639822       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 17:36:42.639856       1 main.go:301] handling current node
	I0414 17:36:52.640304       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 17:36:52.640338       1 main.go:301] handling current node
	
	
	==> kube-apiserver [028a52a56f74d9ecb5cfe9b10e03433aab1e03829f24fff9377f4b6d3f601ff6] <==
	I0414 17:32:39.850442       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0414 17:33:45.394170       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43830: use of closed network connection
	E0414 17:33:45.645184       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43846: use of closed network connection
	I0414 17:33:55.231831       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.159.207"}
	I0414 17:34:26.538771       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0414 17:34:27.574897       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0414 17:34:32.131800       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0414 17:34:32.437604       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.194.20"}
	I0414 17:34:34.867497       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0414 17:34:40.731804       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0414 17:34:49.190177       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 17:34:49.190233       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 17:34:49.209099       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 17:34:49.209220       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 17:34:49.242962       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 17:34:49.243650       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 17:34:49.280969       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 17:34:49.281386       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 17:34:49.285103       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 17:34:49.285415       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0414 17:34:50.281109       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0414 17:34:50.286107       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0414 17:34:50.323981       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	E0414 17:35:37.825562       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0414 17:36:51.456650       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.16.182"}
	
	
	==> kube-controller-manager [6f7987f86bbe3e175bd20e72e959a3e1a1839ad896d1449aaff8e7af9031be15] <==
	E0414 17:36:02.032882       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 17:36:07.081143       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 17:36:07.082174       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0414 17:36:07.083137       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 17:36:07.083233       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0414 17:36:10.188797       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0414 17:36:10.703906       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-7dc7f9b5b8" duration="10.741µs"
	W0414 17:36:14.507919       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 17:36:14.508990       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0414 17:36:14.510055       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 17:36:14.510090       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 17:36:41.701986       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 17:36:41.703056       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0414 17:36:41.703980       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 17:36:41.704018       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 17:36:43.742291       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 17:36:43.743323       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0414 17:36:43.744884       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 17:36:43.744922       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0414 17:36:51.193880       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="35.843321ms"
	I0414 17:36:51.217599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="23.664122ms"
	I0414 17:36:51.232188       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="14.541594ms"
	I0414 17:36:51.232275       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="50.363µs"
	I0414 17:36:53.285283       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="13.719398ms"
	I0414 17:36:53.285428       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="37.449µs"
	
	
	==> kube-proxy [a6e5c33a369750a6e2db96183781d2c58fd3a16737c7fdb9f1b260e7a1cef204] <==
	I0414 17:31:56.548808       1 server_linux.go:66] "Using iptables proxy"
	I0414 17:31:56.688414       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0414 17:31:56.688588       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 17:31:59.018308       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0414 17:31:59.199895       1 server_linux.go:170] "Using iptables Proxier"
	I0414 17:32:00.618760       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 17:32:00.619097       1 server.go:497] "Version info" version="v1.32.2"
	I0414 17:32:00.619115       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 17:32:00.620463       1 config.go:199] "Starting service config controller"
	I0414 17:32:00.646831       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 17:32:00.628640       1 config.go:105] "Starting endpoint slice config controller"
	I0414 17:32:00.647294       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 17:32:00.630807       1 config.go:329] "Starting node config controller"
	I0414 17:32:00.647598       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 17:32:01.031218       1 shared_informer.go:320] Caches are synced for node config
	I0414 17:32:01.031608       1 shared_informer.go:320] Caches are synced for service config
	I0414 17:32:01.031682       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d9e820fdd8bbd6cee0632eac10ef9ab0c19d597a90488eac07c101d641069f0e] <==
	W0414 17:31:47.815704       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0414 17:31:47.816268       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0414 17:31:47.815731       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0414 17:31:47.816354       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 17:31:47.815759       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0414 17:31:47.816441       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 17:31:47.815788       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0414 17:31:47.816541       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0414 17:31:47.815829       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0414 17:31:47.816632       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 17:31:48.682998       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0414 17:31:48.683122       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0414 17:31:48.705387       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0414 17:31:48.705504       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 17:31:48.715774       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 17:31:48.715889       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 17:31:48.774903       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 17:31:48.775048       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 17:31:48.783027       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 17:31:48.783147       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 17:31:48.835774       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 17:31:48.835889       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 17:31:49.230533       1 reflector.go:569] runtime/asm_arm64.s:1223: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 17:31:49.230653       1 reflector.go:166] "Unhandled Error" err="runtime/asm_arm64.s:1223: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0414 17:31:51.595349       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.460860    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ee806504b02c67578af3bc6bf55afb71ae0062b32a696e349102cf15751aae04/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ee806504b02c67578af3bc6bf55afb71ae0062b32a696e349102cf15751aae04/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.464060    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ee806504b02c67578af3bc6bf55afb71ae0062b32a696e349102cf15751aae04/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ee806504b02c67578af3bc6bf55afb71ae0062b32a696e349102cf15751aae04/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.465165    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/090d67dbbc4c5b2ac1812eadba4fc77e496c3fc3862c22492a6accf8de9d717b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/090d67dbbc4c5b2ac1812eadba4fc77e496c3fc3862c22492a6accf8de9d717b/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.466276    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d5432242779771ce5d1d21676b899635c9452fd2be4c9350814a381355de1be3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d5432242779771ce5d1d21676b899635c9452fd2be4c9350814a381355de1be3/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.499965    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2dc1535e8d5417370d846f61bf6203eca5134b995f39e842bea9f5edc286a7cb/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2dc1535e8d5417370d846f61bf6203eca5134b995f39e842bea9f5edc286a7cb/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.547382    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/62b1cb6b920963c9312036233addcaae386db1275e682d627e184a83cdfcb767/diff" to get inode usage: stat /var/lib/containers/storage/overlay/62b1cb6b920963c9312036233addcaae386db1275e682d627e184a83cdfcb767/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.550701    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8b10e47ad963cb961fafb20551563e7363d0200f9bc6ef8936a205711b7c36de/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8b10e47ad963cb961fafb20551563e7363d0200f9bc6ef8936a205711b7c36de/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.552892    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a4050f44e9113e0e7d86b32685b5779c8df185fbf0007fb494a6bd6d6807967b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a4050f44e9113e0e7d86b32685b5779c8df185fbf0007fb494a6bd6d6807967b/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.555083    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a4050f44e9113e0e7d86b32685b5779c8df185fbf0007fb494a6bd6d6807967b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a4050f44e9113e0e7d86b32685b5779c8df185fbf0007fb494a6bd6d6807967b/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.556196    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/233a0b0aaa2b438b5194fb0d5843f02f7c198eada24816744600b1ad7113f224/diff" to get inode usage: stat /var/lib/containers/storage/overlay/233a0b0aaa2b438b5194fb0d5843f02f7c198eada24816744600b1ad7113f224/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.559376    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/233a0b0aaa2b438b5194fb0d5843f02f7c198eada24816744600b1ad7113f224/diff" to get inode usage: stat /var/lib/containers/storage/overlay/233a0b0aaa2b438b5194fb0d5843f02f7c198eada24816744600b1ad7113f224/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.561567    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a875b0693798d6ca2740106b324cb34927d0134831144289afcfc6466ae7887f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a875b0693798d6ca2740106b324cb34927d0134831144289afcfc6466ae7887f/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.570606    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b40b98eb366bb098bb8009a796e1e6cb64953fa842064410e517617577d6636f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b40b98eb366bb098bb8009a796e1e6cb64953fa842064410e517617577d6636f/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.570813    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/11e7519312dd5d5bbf30ab1d41d27f3283b50bf161dd574d0f128da7b7f02902/diff" to get inode usage: stat /var/lib/containers/storage/overlay/11e7519312dd5d5bbf30ab1d41d27f3283b50bf161dd574d0f128da7b7f02902/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.581091    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/29b9d5cb120a5f8074200441fc9b854b48ff27d9d549781ef31a1b9f013465ea/diff" to get inode usage: stat /var/lib/containers/storage/overlay/29b9d5cb120a5f8074200441fc9b854b48ff27d9d549781ef31a1b9f013465ea/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.582240    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/11e7519312dd5d5bbf30ab1d41d27f3283b50bf161dd574d0f128da7b7f02902/diff" to get inode usage: stat /var/lib/containers/storage/overlay/11e7519312dd5d5bbf30ab1d41d27f3283b50bf161dd574d0f128da7b7f02902/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.586492    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/aabaae5512ba4734eb838cb00e7eb67e265530a663cb9bd9ca11db7053655ea1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/aabaae5512ba4734eb838cb00e7eb67e265530a663cb9bd9ca11db7053655ea1/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.587634    1523 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/29b9d5cb120a5f8074200441fc9b854b48ff27d9d549781ef31a1b9f013465ea/diff" to get inode usage: stat /var/lib/containers/storage/overlay/29b9d5cb120a5f8074200441fc9b854b48ff27d9d549781ef31a1b9f013465ea/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.609447    1523 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744652210609215004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605550,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 17:36:50 addons-225375 kubelet[1523]: E0414 17:36:50.609643    1523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744652210609215004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605550,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 17:36:51 addons-225375 kubelet[1523]: I0414 17:36:51.198498    1523 memory_manager.go:355] "RemoveStaleState removing state" podUID="5a498477-9df4-40e8-94ca-f734be68b25d" containerName="cloud-spanner-emulator"
	Apr 14 17:36:51 addons-225375 kubelet[1523]: I0414 17:36:51.198541    1523 memory_manager.go:355] "RemoveStaleState removing state" podUID="2f8c6130-4c9b-4b56-a831-91d9397b21f5" containerName="helper-pod"
	Apr 14 17:36:51 addons-225375 kubelet[1523]: I0414 17:36:51.198549    1523 memory_manager.go:355] "RemoveStaleState removing state" podUID="dac79c50-fe8d-4e41-bf74-d85b7ba6cd1d" containerName="local-path-provisioner"
	Apr 14 17:36:51 addons-225375 kubelet[1523]: I0414 17:36:51.296126    1523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfkkp\" (UniqueName: \"kubernetes.io/projected/b0321f8e-0eae-43e1-bf1b-2900d97b9f72-kube-api-access-wfkkp\") pod \"hello-world-app-7d9564db4-5wjlq\" (UID: \"b0321f8e-0eae-43e1-bf1b-2900d97b9f72\") " pod="default/hello-world-app-7d9564db4-5wjlq"
	Apr 14 17:36:51 addons-225375 kubelet[1523]: W0414 17:36:51.572598    1523 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bc29855cba0362dc60248bdd155272d4f278831d6cbbcf4dacc69f89eca4d473/crio-42ee581622a401de78350248fa69a093e548a50538a64e36a76d00c02f8fc60b WatchSource:0}: Error finding container 42ee581622a401de78350248fa69a093e548a50538a64e36a76d00c02f8fc60b: Status 404 returned error can't find the container with id 42ee581622a401de78350248fa69a093e548a50538a64e36a76d00c02f8fc60b
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-225375 -n addons-225375
helpers_test.go:261: (dbg) Run:  kubectl --context addons-225375 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-d6c7w ingress-nginx-admission-patch-d7klk
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-225375 describe pod ingress-nginx-admission-create-d6c7w ingress-nginx-admission-patch-d7klk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-225375 describe pod ingress-nginx-admission-create-d6c7w ingress-nginx-admission-patch-d7klk: exit status 1 (79.831854ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-d6c7w" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-d7klk" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-225375 describe pod ingress-nginx-admission-create-d6c7w ingress-nginx-admission-patch-d7klk: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-225375 addons disable ingress --alsologtostderr -v=1: (7.771316991s)
--- FAIL: TestAddons/parallel/Ingress (151.29s)


Test pass (298/331)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.28
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.2/json-events 5.21
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.09
18 TestDownloadOnly/v1.32.2/DeleteAll 0.21
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 151.65
31 TestAddons/serial/GCPAuth/Namespaces 0.22
32 TestAddons/serial/GCPAuth/FakeCredentials 11.99
35 TestAddons/parallel/Registry 18.57
37 TestAddons/parallel/InspektorGadget 12.04
38 TestAddons/parallel/MetricsServer 6.83
40 TestAddons/parallel/CSI 44.25
41 TestAddons/parallel/Headlamp 17.93
42 TestAddons/parallel/CloudSpanner 5.53
43 TestAddons/parallel/LocalPath 51.41
44 TestAddons/parallel/NvidiaDevicePlugin 5.54
45 TestAddons/parallel/Yakd 11.74
47 TestAddons/StoppedEnableDisable 12.23
48 TestCertOptions 33.61
49 TestCertExpiration 241.64
51 TestForceSystemdFlag 34.42
52 TestForceSystemdEnv 33.33
58 TestErrorSpam/setup 30.3
59 TestErrorSpam/start 0.76
60 TestErrorSpam/status 1.05
61 TestErrorSpam/pause 1.79
62 TestErrorSpam/unpause 1.81
63 TestErrorSpam/stop 1.44
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 48.56
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 27.76
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.5
75 TestFunctional/serial/CacheCmd/cache/add_local 1.41
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.97
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.16
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 35.51
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.77
86 TestFunctional/serial/LogsFileCmd 1.78
87 TestFunctional/serial/InvalidService 4.03
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 7.86
91 TestFunctional/parallel/DryRun 0.46
92 TestFunctional/parallel/InternationalLanguage 0.22
93 TestFunctional/parallel/StatusCmd 1.06
97 TestFunctional/parallel/ServiceCmdConnect 10.68
98 TestFunctional/parallel/AddonsCmd 0.22
99 TestFunctional/parallel/PersistentVolumeClaim 25.21
101 TestFunctional/parallel/SSHCmd 0.66
102 TestFunctional/parallel/CpCmd 2.4
104 TestFunctional/parallel/FileSync 0.42
105 TestFunctional/parallel/CertSync 1.96
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.91
113 TestFunctional/parallel/License 0.86
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.49
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 6.25
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.41
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
129 TestFunctional/parallel/ServiceCmd/List 0.63
130 TestFunctional/parallel/MountCmd/any-port 9.79
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
133 TestFunctional/parallel/ServiceCmd/Format 0.54
134 TestFunctional/parallel/ServiceCmd/URL 0.52
135 TestFunctional/parallel/MountCmd/specific-port 2.4
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.49
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.25
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.72
144 TestFunctional/parallel/ImageCommands/Setup 0.71
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.64
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.12
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.29
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.59
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.84
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
155 TestFunctional/delete_echo-server_images 0.03
156 TestFunctional/delete_my-image_image 0.01
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 180.02
163 TestMultiControlPlane/serial/DeployApp 9.09
164 TestMultiControlPlane/serial/PingHostFromPods 1.53
165 TestMultiControlPlane/serial/AddWorkerNode 38.29
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.97
168 TestMultiControlPlane/serial/CopyFile 19.15
169 TestMultiControlPlane/serial/StopSecondaryNode 12.75
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
171 TestMultiControlPlane/serial/RestartSecondaryNode 33.49
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.42
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 173.81
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.58
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
176 TestMultiControlPlane/serial/StopCluster 35.79
177 TestMultiControlPlane/serial/RestartCluster 97.48
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
179 TestMultiControlPlane/serial/AddSecondaryNode 73.09
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.96
184 TestJSONOutput/start/Command 47.83
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.69
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.66
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.89
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.24
209 TestKicCustomNetwork/create_custom_network 40.64
210 TestKicCustomNetwork/use_default_bridge_network 32.74
211 TestKicExistingNetwork 34.74
212 TestKicCustomSubnet 31.04
213 TestKicStaticIP 33.25
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 71.61
218 TestMountStart/serial/StartWithMountFirst 6.56
219 TestMountStart/serial/VerifyMountFirst 0.26
220 TestMountStart/serial/StartWithMountSecond 8.21
221 TestMountStart/serial/VerifyMountSecond 0.26
222 TestMountStart/serial/DeleteFirst 1.62
223 TestMountStart/serial/VerifyMountPostDelete 0.26
224 TestMountStart/serial/Stop 1.19
225 TestMountStart/serial/RestartStopped 7.51
226 TestMountStart/serial/VerifyMountPostStop 0.26
229 TestMultiNode/serial/FreshStart2Nodes 81.57
230 TestMultiNode/serial/DeployApp2Nodes 6.51
231 TestMultiNode/serial/PingHostFrom2Pods 1.01
232 TestMultiNode/serial/AddNode 31.83
233 TestMultiNode/serial/MultiNodeLabels 0.09
234 TestMultiNode/serial/ProfileList 0.66
235 TestMultiNode/serial/CopyFile 9.98
236 TestMultiNode/serial/StopNode 2.24
237 TestMultiNode/serial/StartAfterStop 9.79
238 TestMultiNode/serial/RestartKeepsNodes 88.28
239 TestMultiNode/serial/DeleteNode 5.28
240 TestMultiNode/serial/StopMultiNode 23.92
241 TestMultiNode/serial/RestartMultiNode 62.96
242 TestMultiNode/serial/ValidateNameConflict 32.1
247 TestPreload 126.58
249 TestScheduledStopUnix 107.77
252 TestInsufficientStorage 10.91
253 TestRunningBinaryUpgrade 109.08
255 TestKubernetesUpgrade 389.92
256 TestMissingContainerUpgrade 183.66
258 TestPause/serial/Start 61
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
261 TestNoKubernetes/serial/StartWithK8s 39.25
262 TestNoKubernetes/serial/StartWithStopK8s 19.01
263 TestNoKubernetes/serial/Start 6.38
264 TestPause/serial/SecondStartNoReconfiguration 43.86
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
266 TestNoKubernetes/serial/ProfileList 0.98
267 TestNoKubernetes/serial/Stop 1.21
268 TestNoKubernetes/serial/StartNoArgs 6.6
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
270 TestPause/serial/Pause 1.08
271 TestPause/serial/VerifyStatus 0.41
272 TestPause/serial/Unpause 0.84
273 TestPause/serial/PauseAgain 0.95
274 TestPause/serial/DeletePaused 2.97
275 TestPause/serial/VerifyDeletedResources 0.24
276 TestStoppedBinaryUpgrade/Setup 1
277 TestStoppedBinaryUpgrade/Upgrade 67.53
278 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
293 TestNetworkPlugins/group/false 3.78
298 TestStartStop/group/old-k8s-version/serial/FirstStart 151.61
300 TestStartStop/group/no-preload/serial/FirstStart 66.02
301 TestStartStop/group/no-preload/serial/DeployApp 9.37
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.49
303 TestStartStop/group/no-preload/serial/Stop 12.01
304 TestStartStop/group/old-k8s-version/serial/DeployApp 9.53
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
306 TestStartStop/group/no-preload/serial/SecondStart 282.9
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.37
308 TestStartStop/group/old-k8s-version/serial/Stop 12.15
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
310 TestStartStop/group/old-k8s-version/serial/SecondStart 146.05
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
313 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
314 TestStartStop/group/old-k8s-version/serial/Pause 2.98
316 TestStartStop/group/embed-certs/serial/FirstStart 53.41
317 TestStartStop/group/embed-certs/serial/DeployApp 10.36
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
319 TestStartStop/group/embed-certs/serial/Stop 11.96
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
321 TestStartStop/group/embed-certs/serial/SecondStart 267.11
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
324 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
325 TestStartStop/group/no-preload/serial/Pause 3.06
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.37
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.35
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.06
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.96
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 278.65
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
336 TestStartStop/group/embed-certs/serial/Pause 3.11
338 TestStartStop/group/newest-cni/serial/FirstStart 37.16
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
341 TestStartStop/group/newest-cni/serial/Stop 1.26
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
343 TestStartStop/group/newest-cni/serial/SecondStart 16.68
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
347 TestStartStop/group/newest-cni/serial/Pause 3.12
348 TestNetworkPlugins/group/auto/Start 50.95
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
350 TestNetworkPlugins/group/auto/KubeletFlags 0.3
351 TestNetworkPlugins/group/auto/NetCatPod 11.29
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
353 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
354 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.32
355 TestNetworkPlugins/group/auto/DNS 0.27
356 TestNetworkPlugins/group/auto/Localhost 0.16
357 TestNetworkPlugins/group/auto/HairPin 0.19
358 TestNetworkPlugins/group/kindnet/Start 60.52
359 TestNetworkPlugins/group/calico/Start 71.32
360 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
362 TestNetworkPlugins/group/kindnet/NetCatPod 12.36
363 TestNetworkPlugins/group/kindnet/DNS 0.2
364 TestNetworkPlugins/group/kindnet/Localhost 0.16
365 TestNetworkPlugins/group/kindnet/HairPin 0.18
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.44
368 TestNetworkPlugins/group/calico/NetCatPod 11.39
369 TestNetworkPlugins/group/custom-flannel/Start 58.2
370 TestNetworkPlugins/group/calico/DNS 0.39
371 TestNetworkPlugins/group/calico/Localhost 0.31
372 TestNetworkPlugins/group/calico/HairPin 0.23
373 TestNetworkPlugins/group/enable-default-cni/Start 49.42
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.4
376 TestNetworkPlugins/group/custom-flannel/DNS 0.2
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.39
381 TestNetworkPlugins/group/flannel/Start 49.1
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.3
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
385 TestNetworkPlugins/group/bridge/Start 48.98
386 TestNetworkPlugins/group/flannel/ControllerPod 6.01
387 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
388 TestNetworkPlugins/group/flannel/NetCatPod 12.52
389 TestNetworkPlugins/group/flannel/DNS 0.16
390 TestNetworkPlugins/group/flannel/Localhost 0.15
391 TestNetworkPlugins/group/flannel/HairPin 0.18
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
393 TestNetworkPlugins/group/bridge/NetCatPod 12.39
394 TestNetworkPlugins/group/bridge/DNS 0.2
395 TestNetworkPlugins/group/bridge/Localhost 0.15
396 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (8.28s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-890526 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-890526 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.279653751s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.28s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0414 17:30:53.860161  463312 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0414 17:30:53.860241  463312 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20201-457936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-890526
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-890526: exit status 85 (90.995439ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-890526 | jenkins | v1.35.0 | 14 Apr 25 17:30 UTC |          |
	|         | -p download-only-890526        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 17:30:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 17:30:45.629002  463317 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:30:45.629121  463317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:30:45.629133  463317 out.go:358] Setting ErrFile to fd 2...
	I0414 17:30:45.629138  463317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:30:45.629454  463317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
	W0414 17:30:45.629591  463317 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20201-457936/.minikube/config/config.json: open /home/jenkins/minikube-integration/20201-457936/.minikube/config/config.json: no such file or directory
	I0414 17:30:45.629992  463317 out.go:352] Setting JSON to true
	I0414 17:30:45.630930  463317 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7992,"bootTime":1744643854,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0414 17:30:45.631003  463317 start.go:139] virtualization:  
	I0414 17:30:45.634943  463317 out.go:97] [download-only-890526] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0414 17:30:45.635207  463317 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20201-457936/.minikube/cache/preloaded-tarball: no such file or directory
	I0414 17:30:45.635259  463317 notify.go:220] Checking for updates...
	I0414 17:30:45.638175  463317 out.go:169] MINIKUBE_LOCATION=20201
	I0414 17:30:45.641213  463317 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:30:45.644148  463317 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20201-457936/kubeconfig
	I0414 17:30:45.646891  463317 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20201-457936/.minikube
	I0414 17:30:45.649783  463317 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0414 17:30:45.655481  463317 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0414 17:30:45.655722  463317 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:30:45.686018  463317 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0414 17:30:45.686244  463317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 17:30:45.740520  463317 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-14 17:30:45.731602322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0414 17:30:45.740629  463317 docker.go:318] overlay module found
	I0414 17:30:45.743646  463317 out.go:97] Using the docker driver based on user configuration
	I0414 17:30:45.743690  463317 start.go:297] selected driver: docker
	I0414 17:30:45.743703  463317 start.go:901] validating driver "docker" against <nil>
	I0414 17:30:45.743810  463317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 17:30:45.797951  463317 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-14 17:30:45.788337882 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0414 17:30:45.798108  463317 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 17:30:45.798436  463317 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0414 17:30:45.798597  463317 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 17:30:45.801714  463317 out.go:169] Using Docker driver with root privileges
	I0414 17:30:45.804610  463317 cni.go:84] Creating CNI manager for ""
	I0414 17:30:45.804687  463317 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0414 17:30:45.804700  463317 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0414 17:30:45.804782  463317 start.go:340] cluster config:
	{Name:download-only-890526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-890526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:30:45.807780  463317 out.go:97] Starting "download-only-890526" primary control-plane node in "download-only-890526" cluster
	I0414 17:30:45.807816  463317 cache.go:121] Beginning downloading kic base image for docker with crio
	I0414 17:30:45.810757  463317 out.go:97] Pulling base image v0.0.46-1744107393-20604 ...
	I0414 17:30:45.810798  463317 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 17:30:45.810915  463317 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon
	I0414 17:30:45.827677  463317 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a to local cache
	I0414 17:30:45.827872  463317 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local cache directory
	I0414 17:30:45.827976  463317 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a to local cache
	I0414 17:30:45.869349  463317 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0414 17:30:45.869374  463317 cache.go:56] Caching tarball of preloaded images
	I0414 17:30:45.870173  463317 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 17:30:45.873396  463317 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0414 17:30:45.873434  463317 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0414 17:30:45.953499  463317 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/20201-457936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0414 17:30:50.445010  463317 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0414 17:30:50.445139  463317 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20201-457936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-890526 host does not exist
	  To start a cluster, run: "minikube start -p download-only-890526"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-890526
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.32.2/json-events (5.21s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-935890 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-935890 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.20527863s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (5.21s)

TestDownloadOnly/v1.32.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0414 17:30:59.511522  463312 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0414 17:30:59.511559  463312 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20201-457936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

TestDownloadOnly/v1.32.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-935890
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-935890: exit status 85 (87.479799ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-890526 | jenkins | v1.35.0 | 14 Apr 25 17:30 UTC |                     |
	|         | -p download-only-890526        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 14 Apr 25 17:30 UTC | 14 Apr 25 17:30 UTC |
	| delete  | -p download-only-890526        | download-only-890526 | jenkins | v1.35.0 | 14 Apr 25 17:30 UTC | 14 Apr 25 17:30 UTC |
	| start   | -o=json --download-only        | download-only-935890 | jenkins | v1.35.0 | 14 Apr 25 17:30 UTC |                     |
	|         | -p download-only-935890        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 17:30:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 17:30:54.351896  463517 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:30:54.352032  463517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:30:54.352043  463517 out.go:358] Setting ErrFile to fd 2...
	I0414 17:30:54.352049  463517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:30:54.352406  463517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
	I0414 17:30:54.352873  463517 out.go:352] Setting JSON to true
	I0414 17:30:54.353747  463517 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8001,"bootTime":1744643854,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0414 17:30:54.353836  463517 start.go:139] virtualization:  
	I0414 17:30:54.357193  463517 out.go:97] [download-only-935890] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0414 17:30:54.357465  463517 notify.go:220] Checking for updates...
	I0414 17:30:54.361017  463517 out.go:169] MINIKUBE_LOCATION=20201
	I0414 17:30:54.364033  463517 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:30:54.366916  463517 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20201-457936/kubeconfig
	I0414 17:30:54.369872  463517 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20201-457936/.minikube
	I0414 17:30:54.372880  463517 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0414 17:30:54.378695  463517 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0414 17:30:54.379037  463517 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:30:54.405696  463517 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0414 17:30:54.405797  463517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 17:30:54.460243  463517 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:44 SystemTime:2025-04-14 17:30:54.451501892 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0414 17:30:54.460388  463517 docker.go:318] overlay module found
	I0414 17:30:54.463365  463517 out.go:97] Using the docker driver based on user configuration
	I0414 17:30:54.463407  463517 start.go:297] selected driver: docker
	I0414 17:30:54.463415  463517 start.go:901] validating driver "docker" against <nil>
	I0414 17:30:54.463531  463517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 17:30:54.519151  463517 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:44 SystemTime:2025-04-14 17:30:54.510545817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0414 17:30:54.519307  463517 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 17:30:54.519591  463517 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0414 17:30:54.519745  463517 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 17:30:54.522751  463517 out.go:169] Using Docker driver with root privileges
	I0414 17:30:54.525502  463517 cni.go:84] Creating CNI manager for ""
	I0414 17:30:54.525566  463517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0414 17:30:54.525581  463517 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0414 17:30:54.525658  463517 start.go:340] cluster config:
	{Name:download-only-935890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-935890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:30:54.528631  463517 out.go:97] Starting "download-only-935890" primary control-plane node in "download-only-935890" cluster
	I0414 17:30:54.528653  463517 cache.go:121] Beginning downloading kic base image for docker with crio
	I0414 17:30:54.531471  463517 out.go:97] Pulling base image v0.0.46-1744107393-20604 ...
	I0414 17:30:54.531494  463517 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 17:30:54.531609  463517 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon
	I0414 17:30:54.547373  463517 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a to local cache
	I0414 17:30:54.547507  463517 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local cache directory
	I0414 17:30:54.547526  463517 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local cache directory, skipping pull
	I0414 17:30:54.547531  463517 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a exists in cache, skipping pull
	I0414 17:30:54.547538  463517 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a as a tarball
	I0414 17:30:54.599705  463517 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4
	I0414 17:30:54.599736  463517 cache.go:56] Caching tarball of preloaded images
	I0414 17:30:54.599900  463517 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 17:30:54.603015  463517 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0414 17:30:54.603041  463517 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4 ...
	I0414 17:30:54.698134  463517 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:40a74f4030ed7e841ef78821ba704831 -> /home/jenkins/minikube-integration/20201-457936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4
	I0414 17:30:57.922444  463517 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4 ...
	I0414 17:30:57.922542  463517 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20201-457936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4 ...
	I0414 17:30:58.847918  463517 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 17:30:58.848370  463517 profile.go:143] Saving config to /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/download-only-935890/config.json ...
	I0414 17:30:58.848417  463517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/download-only-935890/config.json: {Name:mk28fc05a48011712f16557c119b9d3d28496117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:30:58.848610  463517 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 17:30:58.848777  463517 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/20201-457936/.minikube/cache/linux/arm64/v1.32.2/kubectl
	
	
	* The control-plane node download-only-935890 host does not exist
	  To start a cluster, run: "minikube start -p download-only-935890"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.09s)

TestDownloadOnly/v1.32.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.21s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-935890
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I0414 17:31:00.900421  463312 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-047608 --alsologtostderr --binary-mirror http://127.0.0.1:42079 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-047608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-047608
--- PASS: TestBinaryMirror (0.61s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-225375
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-225375: exit status 85 (82.337915ms)

-- stdout --
	* Profile "addons-225375" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-225375"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-225375
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-225375: exit status 85 (78.991007ms)

-- stdout --
	* Profile "addons-225375" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-225375"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (151.65s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-225375 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-225375 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m31.652961874s)
--- PASS: TestAddons/Setup (151.65s)

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-225375 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-225375 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/serial/GCPAuth/FakeCredentials (11.99s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-225375 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-225375 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ee90bf99-0fc4-4c95-8136-586428e90aaf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ee90bf99-0fc4-4c95-8136-586428e90aaf] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004436115s
addons_test.go:633: (dbg) Run:  kubectl --context addons-225375 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-225375 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-225375 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-225375 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.99s)

TestAddons/parallel/Registry (18.57s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 13.089975ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-n96sg" [d9b0d607-5c52-4819-a74e-00b7577de414] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003941742s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sczjl" [52f0da01-0637-45eb-b3a3-30867a12edd4] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003081302s
addons_test.go:331: (dbg) Run:  kubectl --context addons-225375 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-225375 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-225375 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.372672485s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 ip
2025/04/14 17:34:12 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.57s)

TestAddons/parallel/InspektorGadget (12.04s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pv858" [1b82bde5-a57c-4fdf-bd1d-35edc2cc1425] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004352213s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-225375 addons disable inspektor-gadget --alsologtostderr -v=1: (6.0309755s)
--- PASS: TestAddons/parallel/InspektorGadget (12.04s)

TestAddons/parallel/MetricsServer (6.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 9.795497ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-cckq5" [3c6d4f27-a10c-41ac-89fe-ca8d03b92b77] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003065513s
addons_test.go:402: (dbg) Run:  kubectl --context addons-225375 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.83s)

TestAddons/parallel/CSI (44.25s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0414 17:34:12.292615  463312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0414 17:34:12.297389  463312 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0414 17:34:12.297421  463312 kapi.go:107] duration metric: took 7.67206ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.683703ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-225375 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-225375 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [89fd5b68-47f8-4143-92e5-d0ee1cc858d9] Pending
helpers_test.go:344: "task-pv-pod" [89fd5b68-47f8-4143-92e5-d0ee1cc858d9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [89fd5b68-47f8-4143-92e5-d0ee1cc858d9] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.006922503s
addons_test.go:511: (dbg) Run:  kubectl --context addons-225375 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-225375 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-225375 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-225375 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-225375 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-225375 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-225375 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a91047af-ec34-4787-9173-a1b160535c21] Pending
helpers_test.go:344: "task-pv-pod-restore" [a91047af-ec34-4787-9173-a1b160535c21] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a91047af-ec34-4787-9173-a1b160535c21] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00364396s
addons_test.go:553: (dbg) Run:  kubectl --context addons-225375 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-225375 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-225375 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-225375 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.942357954s)
--- PASS: TestAddons/parallel/CSI (44.25s)

TestAddons/parallel/Headlamp (17.93s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-225375 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-qz7lt" [dbb39716-cafb-4dfc-972b-f712f3344f4b] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-qz7lt" [dbb39716-cafb-4dfc-972b-f712f3344f4b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-qz7lt" [dbb39716-cafb-4dfc-972b-f712f3344f4b] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004064936s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-225375 addons disable headlamp --alsologtostderr -v=1: (5.970304007s)
--- PASS: TestAddons/parallel/Headlamp (17.93s)

TestAddons/parallel/CloudSpanner (5.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-r7q77" [5a498477-9df4-40e8-94ca-f734be68b25d] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003976523s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

TestAddons/parallel/LocalPath (51.41s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-225375 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-225375 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-225375 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3a90d4e0-5ba2-4996-a032-cfd4f8e7f57e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3a90d4e0-5ba2-4996-a032-cfd4f8e7f57e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3a90d4e0-5ba2-4996-a032-cfd4f8e7f57e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003406005s
addons_test.go:906: (dbg) Run:  kubectl --context addons-225375 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 ssh "cat /opt/local-path-provisioner/pvc-85319c25-c053-4685-8a1f-c7a523e159f2_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-225375 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-225375 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-225375 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.347279241s)
--- PASS: TestAddons/parallel/LocalPath (51.41s)

TestAddons/parallel/NvidiaDevicePlugin (5.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-66f28" [38c4e820-97af-4507-8c70-478ee09911c6] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004345121s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)

TestAddons/parallel/Yakd (11.74s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-w6d9p" [90f76b3f-d4cd-40da-a556-ec7234753881] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002801203s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-225375 addons disable yakd --alsologtostderr -v=1: (5.735969378s)
--- PASS: TestAddons/parallel/Yakd (11.74s)

TestAddons/StoppedEnableDisable (12.23s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-225375
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-225375: (11.936750819s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-225375
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-225375
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-225375
--- PASS: TestAddons/StoppedEnableDisable (12.23s)

TestCertOptions (33.61s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-269705 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0414 18:18:16.253569  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:18:34.152115  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-269705 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (30.943886477s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-269705 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-269705 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-269705 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-269705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-269705
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-269705: (2.016498545s)
--- PASS: TestCertOptions (33.61s)

TestCertExpiration (241.64s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-818262 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-818262 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (38.703084505s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-818262 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-818262 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (20.44672977s)
helpers_test.go:175: Cleaning up "cert-expiration-818262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-818262
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-818262: (2.487708764s)
--- PASS: TestCertExpiration (241.64s)

TestForceSystemdFlag (34.42s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-376290 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-376290 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.734815786s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-376290 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-376290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-376290
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-376290: (2.36429494s)
--- PASS: TestForceSystemdFlag (34.42s)

TestForceSystemdEnv (33.33s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-141594 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-141594 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (30.9288345s)
helpers_test.go:175: Cleaning up "force-systemd-env-141594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-141594
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-141594: (2.40114609s)
--- PASS: TestForceSystemdEnv (33.33s)

TestErrorSpam/setup (30.3s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-872720 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-872720 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-872720 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-872720 --driver=docker  --container-runtime=crio: (30.304626988s)
--- PASS: TestErrorSpam/setup (30.30s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (1.05s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 status
--- PASS: TestErrorSpam/status (1.05s)

TestErrorSpam/pause (1.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 pause
--- PASS: TestErrorSpam/pause (1.79s)

TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (1.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 stop: (1.243898312s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872720 --log_dir /tmp/nospam-872720 stop
--- PASS: TestErrorSpam/stop (1.44s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20201-457936/.minikube/files/etc/test/nested/copy/463312/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.56s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 start -p functional-666858 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0414 17:38:34.155882  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:38:34.163041  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:38:34.174655  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:38:34.195990  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:38:34.237677  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:38:34.318948  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:38:34.480394  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:38:34.802006  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:38:35.443325  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:38:36.724893  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:38:39.287683  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:38:44.409008  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-arm64 start -p functional-666858 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (48.560517167s)
--- PASS: TestFunctional/serial/StartWithProxy (48.56s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.76s)

=== RUN   TestFunctional/serial/SoftStart
I0414 17:38:51.550727  463312 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-arm64 start -p functional-666858 --alsologtostderr -v=8
E0414 17:38:54.650757  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:39:15.132659  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-arm64 start -p functional-666858 --alsologtostderr -v=8: (27.754812311s)
functional_test.go:680: soft start took 27.758442578s for "functional-666858" cluster.
I0414 17:39:19.305871  463312 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (27.76s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-666858 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-666858 cache add registry.k8s.io/pause:3.1: (1.508077438s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-666858 cache add registry.k8s.io/pause:3.3: (1.529543432s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-666858 cache add registry.k8s.io/pause:latest: (1.459265149s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.50s)

TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-666858 /tmp/TestFunctionalserialCacheCmdcacheadd_local792640223/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 cache add minikube-local-cache-test:functional-666858
functional_test.go:1111: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 cache delete minikube-local-cache-test:functional-666858
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-666858
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666858 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.776411ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-arm64 -p functional-666858 cache reload: (1.065507208s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 kubectl -- --context functional-666858 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-666858 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (35.51s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-arm64 start -p functional-666858 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0414 17:39:56.095262  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-arm64 start -p functional-666858 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.513267387s)
functional_test.go:778: restart took 35.513371346s for "functional-666858" cluster.
I0414 17:40:03.716626  463312 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (35.51s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-666858 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.77s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-arm64 -p functional-666858 logs: (1.77415018s)
--- PASS: TestFunctional/serial/LogsCmd (1.77s)

TestFunctional/serial/LogsFileCmd (1.78s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 logs --file /tmp/TestFunctionalserialLogsFileCmd792217275/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-arm64 -p functional-666858 logs --file /tmp/TestFunctionalserialLogsFileCmd792217275/001/logs.txt: (1.780754708s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.78s)

TestFunctional/serial/InvalidService (4.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-666858 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-666858
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-666858: exit status 115 (449.921068ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30947 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-666858 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.03s)

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666858 config get cpus: exit status 14 (98.897184ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666858 config get cpus: exit status 14 (64.871344ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

TestFunctional/parallel/DashboardCmd (7.86s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-666858 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-666858 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 489866: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.86s)

TestFunctional/parallel/DryRun (0.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-666858 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-666858 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (195.872354ms)

-- stdout --
	* [functional-666858] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20201
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20201-457936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20201-457936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0414 17:40:44.099753  489606 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:40:44.099961  489606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:40:44.099980  489606 out.go:358] Setting ErrFile to fd 2...
	I0414 17:40:44.100001  489606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:40:44.100351  489606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
	I0414 17:40:44.100901  489606 out.go:352] Setting JSON to false
	I0414 17:40:44.102718  489606 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8590,"bootTime":1744643854,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0414 17:40:44.102806  489606 start.go:139] virtualization:  
	I0414 17:40:44.106216  489606 out.go:177] * [functional-666858] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0414 17:40:44.109231  489606 out.go:177]   - MINIKUBE_LOCATION=20201
	I0414 17:40:44.109273  489606 notify.go:220] Checking for updates...
	I0414 17:40:44.117693  489606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:40:44.120694  489606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20201-457936/kubeconfig
	I0414 17:40:44.123641  489606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20201-457936/.minikube
	I0414 17:40:44.126465  489606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0414 17:40:44.129355  489606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:40:44.132736  489606 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:40:44.133423  489606 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:40:44.158960  489606 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0414 17:40:44.159104  489606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 17:40:44.223384  489606 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-14 17:40:44.214404303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0414 17:40:44.223493  489606 docker.go:318] overlay module found
	I0414 17:40:44.226657  489606 out.go:177] * Using the docker driver based on existing profile
	I0414 17:40:44.229458  489606 start.go:297] selected driver: docker
	I0414 17:40:44.229472  489606 start.go:901] validating driver "docker" against &{Name:functional-666858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-666858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:40:44.229566  489606 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:40:44.233185  489606 out.go:201] 
	W0414 17:40:44.236034  489606 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0414 17:40:44.238859  489606 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-arm64 start -p functional-666858 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 start -p functional-666858 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-666858 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (216.641309ms)

-- stdout --
	* [functional-666858] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20201
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20201-457936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20201-457936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0414 17:40:43.900753  489560 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:40:43.900995  489560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:40:43.901004  489560 out.go:358] Setting ErrFile to fd 2...
	I0414 17:40:43.901009  489560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:40:43.902125  489560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
	I0414 17:40:43.902608  489560 out.go:352] Setting JSON to false
	I0414 17:40:43.903600  489560 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8590,"bootTime":1744643854,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0414 17:40:43.903673  489560 start.go:139] virtualization:  
	I0414 17:40:43.907477  489560 out.go:177] * [functional-666858] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0414 17:40:43.910278  489560 out.go:177]   - MINIKUBE_LOCATION=20201
	I0414 17:40:43.910401  489560 notify.go:220] Checking for updates...
	I0414 17:40:43.916136  489560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:40:43.918999  489560 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20201-457936/kubeconfig
	I0414 17:40:43.921864  489560 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20201-457936/.minikube
	I0414 17:40:43.924668  489560 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0414 17:40:43.927517  489560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:40:43.930896  489560 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:40:43.931505  489560 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:40:43.954052  489560 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0414 17:40:43.954169  489560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 17:40:44.027005  489560 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-14 17:40:44.016587023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0414 17:40:44.027121  489560 docker.go:318] overlay module found
	I0414 17:40:44.030160  489560 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0414 17:40:44.032896  489560 start.go:297] selected driver: docker
	I0414 17:40:44.032936  489560 start.go:901] validating driver "docker" against &{Name:functional-666858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-666858 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:40:44.033061  489560 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:40:44.036651  489560 out.go:201] 
	W0414 17:40:44.039614  489560 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0414 17:40:44.042462  489560 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
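The failure mode exercised above (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY) comes from minikube validating the requested --memory against a usable minimum before starting. A minimal sketch of that check, with the 1800 MB floor taken from the log message and the function name purely hypothetical:

```go
package main

import "fmt"

// minUsableMB is the floor quoted in the log message above
// ("inférieure au minimum utilisable de 1800 Mo").
const minUsableMB = 1800

// validateRequestedMemory is a hypothetical stand-in for minikube's
// driver validation: it rejects requests below the usable minimum.
func validateRequestedMemory(reqMB int) error {
	if reqMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %d MB is less than the usable minimum of %d MB", reqMB, minUsableMB)
	}
	return nil
}

func main() {
	// 250 MB is the --dry-run request from the test; it must fail.
	fmt.Println(validateRequestedMemory(250))
	// 4000 MB matches the profile's configured Memory:4000 and passes.
	fmt.Println(validateRequestedMemory(4000))
}
```

The test passes precisely because the command exits non-zero: the dry run is expected to be refused.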

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
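The `-f` flag above formats `minikube status` with a Go text/template over the status fields (the literal `kublet` in the format string is just a label, not a field name). A self-contained sketch of how such a template renders, using a local stand-in struct rather than minikube's real status type:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is a local stand-in with the fields the template references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// renderStatus executes a user-supplied format string against a Status,
// mirroring what `minikube status -f` does with its own status struct.
func renderStatus(format string, s Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := tmpl.Execute(&out, s); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	// The exact format string from the test invocation above.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	line, err := renderStatus(format, Status{"Running", "Running", "Running", "Configured"})
	if err != nil {
		panic(err)
	}
	fmt.Println(line) // host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}
```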

TestFunctional/parallel/ServiceCmdConnect (10.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1644: (dbg) Run:  kubectl --context functional-666858 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-666858 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-qxcll" [89a61dac-f4c0-4f35-8bcc-8be7df00bad7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-qxcll" [89a61dac-f4c0-4f35-8bcc-8be7df00bad7] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004094677s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:31595
functional_test.go:1692: http://192.168.49.2:31595: success! body:

Hostname: hello-node-connect-8449669db6-qxcll

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31595
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.68s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (25.21s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [11b88854-8caf-4c15-a3c8-72307427b23d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004090767s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-666858 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-666858 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-666858 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-666858 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e3a574de-f517-4cbb-af2f-39bea0fe1db5] Pending
helpers_test.go:344: "sp-pod" [e3a574de-f517-4cbb-af2f-39bea0fe1db5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e3a574de-f517-4cbb-af2f-39bea0fe1db5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00380631s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-666858 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-666858 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-666858 delete -f testdata/storage-provisioner/pod.yaml: (1.219486331s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-666858 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [287a9779-6827-4472-b240-ebacdb842d40] Pending
helpers_test.go:344: "sp-pod" [287a9779-6827-4472-b240-ebacdb842d40] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002917121s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-666858 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.21s)

TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (2.4s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh -n functional-666858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 cp functional-666858:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4240996008/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh -n functional-666858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh -n functional-666858 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.40s)

TestFunctional/parallel/FileSync (0.42s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/463312/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "sudo cat /etc/test/nested/copy/463312/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.42s)

TestFunctional/parallel/CertSync (1.96s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/463312.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "sudo cat /etc/ssl/certs/463312.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/463312.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "sudo cat /usr/share/ca-certificates/463312.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/4633122.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "sudo cat /etc/ssl/certs/4633122.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/4633122.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "sudo cat /usr/share/ca-certificates/4633122.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.96s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-666858 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
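The NodeLabels check relies on a text/template `range` over the node's label map (`{{range $k, $v := ...labels}}{{$k}} {{end}}`); Go's text/template visits map keys in sorted order, so the emitted key list is deterministic. A minimal sketch with a stand-in label map (the labels shown are illustrative, not taken from the cluster):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// labelKeys renders only the keys of a label map, using the same shape
// of template the test passes to kubectl's --template flag.
func labelKeys(labels map[string]string) string {
	tmpl := template.Must(template.New("labels").Parse("{{range $k, $v := .}}{{$k}} {{end}}"))
	var out bytes.Buffer
	if err := tmpl.Execute(&out, labels); err != nil {
		panic(err)
	}
	return out.String()
}

func main() {
	// Illustrative labels; a real minikube node carries kubernetes.io/*
	// and minikube.k8s.io/* labels among others.
	fmt.Println(labelKeys(map[string]string{
		"minikube.k8s.io/name":   "functional-666858",
		"kubernetes.io/hostname": "functional-666858",
	}))
}
```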

TestFunctional/parallel/NonActiveRuntimeDisabled (0.91s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666858 ssh "sudo systemctl is-active docker": exit status 1 (502.201987ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666858 ssh "sudo systemctl is-active containerd": exit status 1 (410.611235ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.91s)

TestFunctional/parallel/License (0.86s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.86s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-666858 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-666858 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-666858 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 487468: os: process already finished
helpers_test.go:502: unable to terminate pid 487272: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-666858 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-666858 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-666858 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fc715dd7-735f-44de-b4ec-091b6c497bb3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fc715dd7-735f-44de-b4ec-091b6c497bb3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003598935s
I0414 17:40:22.673098  463312 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-666858 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.34.218 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-666858 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1454: (dbg) Run:  kubectl --context functional-666858 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-666858 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-fqwln" [e31f1c1b-8d94-4d00-9f81-af84a31699d0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-fqwln" [e31f1c1b-8d94-4d00-9f81-af84a31699d0] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.005594724s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.25s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1332: Took "346.904844ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1346: Took "59.037004ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1383: Took "413.248072ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1396: Took "60.951316ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-666858 /tmp/TestFunctionalparallelMountCmdany-port1263902232/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744652440325834144" to /tmp/TestFunctionalparallelMountCmdany-port1263902232/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744652440325834144" to /tmp/TestFunctionalparallelMountCmdany-port1263902232/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744652440325834144" to /tmp/TestFunctionalparallelMountCmdany-port1263902232/001/test-1744652440325834144
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666858 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (435.704111ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0414 17:40:40.762581  463312 retry.go:31] will retry after 683.503859ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 14 17:40 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 14 17:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 14 17:40 test-1744652440325834144
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh cat /mount-9p/test-1744652440325834144
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-666858 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [53dc19bc-4fa3-4fb6-8847-2a36e9b36e77] Pending
helpers_test.go:344: "busybox-mount" [53dc19bc-4fa3-4fb6-8847-2a36e9b36e77] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [53dc19bc-4fa3-4fb6-8847-2a36e9b36e77] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [53dc19bc-4fa3-4fb6-8847-2a36e9b36e77] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003601929s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-666858 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-666858 /tmp/TestFunctionalparallelMountCmdany-port1263902232/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.79s)
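The `will retry after 683.503859ms` lines above come from the harness's retry helper (retry.go): the first `findmnt` probe races the mount daemon, fails, and is re-run after a short backoff. A minimal shell sketch of the same pattern, assuming a hypothetical `flaky` command standing in for the `findmnt` probe (the real helper uses a randomized, growing backoff; a fixed 0.1s sleep is used here for brevity):

```shell
# Retry a command until it succeeds or the attempt budget runs out.
retry() {
  max=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      return 1
    fi
    sleep 0.1  # the real helper sleeps a randomized, growing interval
  done
}

# Demo stand-in for the findmnt probe: fails twice, then succeeds.
count_file=$(mktemp)
echo 0 > "$count_file"
flaky() {
  c=$(($(cat "$count_file") + 1))
  echo "$c" > "$count_file"
  [ "$c" -ge 3 ]
}

retry 5 flaky && echo "succeeded after $(cat "$count_file") attempts"
```

This mirrors why the second `findmnt -T /mount-9p | grep 9p` run in the log passes with no further output: the probe simply succeeded on a later attempt.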

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 service list -o json
functional_test.go:1511: Took "552.265711ms" to run "out/minikube-linux-arm64 -p functional-666858 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:30565
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:30565
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-666858 /tmp/TestFunctionalparallelMountCmdspecific-port198056426/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666858 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (583.748354ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0414 17:40:50.702616  463312 retry.go:31] will retry after 504.470219ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-666858 /tmp/TestFunctionalparallelMountCmdspecific-port198056426/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "sudo umount -f /mount-9p"
2025/04/14 17:40:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666858 ssh "sudo umount -f /mount-9p": exit status 1 (395.638337ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-666858 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-666858 /tmp/TestFunctionalparallelMountCmdspecific-port198056426/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.40s)
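The `umount: /mount-9p: not mounted.` failure above (ssh exit status 32) is harmless during cleanup: the mount was already torn down when the daemon stopped. A hedged sketch of a force-unmount helper that tolerates that case — `unmount_if_mounted` is a name invented for this example, and `umount` is stubbed so the demo runs without root or a real mount:

```shell
# Force-unmount a path, treating "not mounted" as success.
unmount_if_mounted() {
  target=$1
  if out=$(umount -f "$target" 2>&1); then
    echo "unmounted $target"
  elif printf '%s\n' "$out" | grep -q 'not mounted'; then
    # umount exits with status 32 here, as seen in the log; treat as done.
    echo "$target already unmounted"
  else
    echo "umount failed: $out" >&2
    return 1
  fi
}

# Stub umount so the demo is self-contained and needs no privileges.
umount() { shift; echo "umount: $1: not mounted."; return 32; }

unmount_if_mounted /mount-9p
```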

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-666858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup717727285/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-666858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup717727285/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-666858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup717727285/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666858 ssh "findmnt -T" /mount1: exit status 1 (989.742643ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0414 17:40:53.522040  463312 retry.go:31] will retry after 458.233263ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-666858 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-666858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup717727285/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-666858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup717727285/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-666858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup717727285/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.49s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-linux-arm64 -p functional-666858 version -o=json --components: (1.253406481s)
--- PASS: TestFunctional/parallel/Version/components (1.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-666858 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-666858
localhost/kicbase/echo-server:functional-666858
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250214-acbabc1a
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-666858 image ls --format short --alsologtostderr:
I0414 17:41:01.796084  492476 out.go:345] Setting OutFile to fd 1 ...
I0414 17:41:01.796479  492476 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 17:41:01.796517  492476 out.go:358] Setting ErrFile to fd 2...
I0414 17:41:01.796539  492476 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 17:41:01.796850  492476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
I0414 17:41:01.798594  492476 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 17:41:01.800978  492476 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 17:41:01.803366  492476 cli_runner.go:164] Run: docker container inspect functional-666858 --format={{.State.Status}}
I0414 17:41:01.831998  492476 ssh_runner.go:195] Run: systemctl --version
I0414 17:41:01.832050  492476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-666858
I0414 17:41:01.856211  492476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33177 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/functional-666858/id_rsa Username:docker}
I0414 17:41:01.959299  492476 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)
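As the stderr trace shows, `image ls --format short` is backed by `sudo crictl images --output json` on the node, with the `repoTags` entries flattened one per line. A rough sketch of that flattening with plain grep, run against a small hand-written sample mimicking the JSON shape (not real crictl output; minikube itself decodes the JSON properly):

```shell
# Hand-written sample in the crictl `images --output json` shape.
json='{"images":[{"repoTags":["registry.k8s.io/pause:3.10"]},
{"repoTags":["docker.io/library/nginx:alpine"]}]}'

# Pull out quoted strings containing a ":" (i.e. tagged image references).
# A crude approximation of the short-format listing.
printf '%s\n' "$json" | grep -o '"[a-z][^"]*:[^"]*"' | tr -d '"'
```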

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-666858 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | e1181ee320546 | 99MB   |
| docker.io/library/nginx                 | alpine             | cedb667e1a7b4 | 50.8MB |
| docker.io/library/nginx                 | latest             | 1530d85bdba64 | 201MB  |
| registry.k8s.io/kube-scheduler          | v1.32.2            | 82dfa03f692fb | 69MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 6417e1437b6d9 | 95MB   |
| docker.io/kindest/kindnetd              | v20250214-acbabc1a | ee75e27fff91c | 99MB   |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/etcd                    | 3.5.16-0           | 7fc9d4aa817aa | 143MB  |
| registry.k8s.io/kube-controller-manager | v1.32.2            | 3c9285acfd2ff | 88.2MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| localhost/kicbase/echo-server           | functional-666858  | ce2d2cda2d858 | 4.79MB |
| localhost/minikube-local-cache-test     | functional-666858  | cd1688b0efcdf | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.32.2            | e5aac5df76d9b | 98.3MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-666858 image ls --format table --alsologtostderr:
I0414 17:41:02.432880  492629 out.go:345] Setting OutFile to fd 1 ...
I0414 17:41:02.433350  492629 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 17:41:02.433383  492629 out.go:358] Setting ErrFile to fd 2...
I0414 17:41:02.433405  492629 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 17:41:02.433705  492629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
I0414 17:41:02.434425  492629 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 17:41:02.434602  492629 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 17:41:02.435120  492629 cli_runner.go:164] Run: docker container inspect functional-666858 --format={{.State.Status}}
I0414 17:41:02.460316  492629 ssh_runner.go:195] Run: systemctl --version
I0414 17:41:02.460376  492629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-666858
I0414 17:41:02.487422  492629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33177 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/functional-666858/id_rsa Username:docker}
I0414 17:41:02.578970  492629 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-666858 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"143226622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c59
01d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d0
3e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-666858"],"size":"4788229"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c
1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"ee75e27fff91c8d59835f9a3efdf968ff404e580bad69746a65bcf3e304ab26f","repoDigests":["docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955","docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495"],"repoTags":["docker.io/kindest/kindnetd:v20250214-acbabc1a"],"size":"99018290"},{"id":"cd1688b0efcdf2a5f6ba23cac6981ab4ea958fab870062a6b6575b6c675e5010","repoDigests":["localhost/minikube-local-cache-test@sha256:cd43e71dc72cc4256a19063996a2521709c40ee23a6360f5fc471e3b35ca86b9"],"repoTags":["localhost/minikube-local-cache-test:functional-666858"],"size":"3330"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6
268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:a532964581fdb02b9d692589bb93db7d4b8a7bd8c120d8fb70803da0e3c83647"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"68973894"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3","repoDigests":["docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591","docker.io/library/nginx@sha256:56568860b56c0bc8099fe1b2d84f43a18939e217e6c619126214c0f71bc27626"],"repoTags":["docker.io/lib
rary/nginx:alpine"],"size":"50780648"},{"id":"1530d85bdba64a05d90f5ed988c8f77ccf2cc8582027f3ad8a7c5b39a5c5e22b","repoDigests":["docker.io/library/nginx@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab","docker.io/library/nginx@sha256:846993cfd1ec2f814d7f3cfdc8df7aa67ecfe6ab233fd990c82d34eea47beb8e"],"repoTags":["docker.io/library/nginx:latest"],"size":"201448359"},{"id":"6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32","repoDigests":["registry.k8s.io/kube-apiserver@sha256:22cdd0e13fe99dc2e5a3476b92895d89d81285cbe73b592033fa05b68c6c19a3","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"94991840"},{"id":"3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90","registry.k8s.io/kube-controller-manager@sha256:737052e0a843
09cec4e9e3f1baaf80160273511c809893db40ab595e494a8777"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"88241478"},{"id":"e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062","repoDigests":["registry.k8s.io/kube-proxy@sha256:6b93583f4856ea0923c6fffd91c802a2362511378390acc6e539a419210ee23b","registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"98313623"},{"id":"e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6","repoDigests":["docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be","docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"99018802"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-666858 image ls --format json --alsologtostderr:
I0414 17:41:02.122415  492544 out.go:345] Setting OutFile to fd 1 ...
I0414 17:41:02.122567  492544 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 17:41:02.122573  492544 out.go:358] Setting ErrFile to fd 2...
I0414 17:41:02.122579  492544 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 17:41:02.123043  492544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
I0414 17:41:02.127449  492544 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 17:41:02.127622  492544 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 17:41:02.128114  492544 cli_runner.go:164] Run: docker container inspect functional-666858 --format={{.State.Status}}
I0414 17:41:02.192312  492544 ssh_runner.go:195] Run: systemctl --version
I0414 17:41:02.192376  492544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-666858
I0414 17:41:02.212871  492544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33177 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/functional-666858/id_rsa Username:docker}
I0414 17:41:02.307082  492544 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-666858 image ls --format yaml --alsologtostderr:
- id: e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6
repoDigests:
- docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "99018802"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: cd1688b0efcdf2a5f6ba23cac6981ab4ea958fab870062a6b6575b6c675e5010
repoDigests:
- localhost/minikube-local-cache-test@sha256:cd43e71dc72cc4256a19063996a2521709c40ee23a6360f5fc471e3b35ca86b9
repoTags:
- localhost/minikube-local-cache-test:functional-666858
size: "3330"
- id: 6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:22cdd0e13fe99dc2e5a3476b92895d89d81285cbe73b592033fa05b68c6c19a3
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "94991840"
- id: 82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:a532964581fdb02b9d692589bb93db7d4b8a7bd8c120d8fb70803da0e3c83647
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "68973894"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ee75e27fff91c8d59835f9a3efdf968ff404e580bad69746a65bcf3e304ab26f
repoDigests:
- docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955
- docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495
repoTags:
- docker.io/kindest/kindnetd:v20250214-acbabc1a
size: "99018290"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-666858
size: "4788229"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "143226622"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3
repoDigests:
- docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591
- docker.io/library/nginx@sha256:56568860b56c0bc8099fe1b2d84f43a18939e217e6c619126214c0f71bc27626
repoTags:
- docker.io/library/nginx:alpine
size: "50780648"
- id: 1530d85bdba64a05d90f5ed988c8f77ccf2cc8582027f3ad8a7c5b39a5c5e22b
repoDigests:
- docker.io/library/nginx@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab
- docker.io/library/nginx@sha256:846993cfd1ec2f814d7f3cfdc8df7aa67ecfe6ab233fd990c82d34eea47beb8e
repoTags:
- docker.io/library/nginx:latest
size: "201448359"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
- registry.k8s.io/kube-controller-manager@sha256:737052e0a84309cec4e9e3f1baaf80160273511c809893db40ab595e494a8777
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "88241478"
- id: e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6b93583f4856ea0923c6fffd91c802a2362511378390acc6e539a419210ee23b
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "98313623"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-666858 image ls --format yaml --alsologtostderr:
I0414 17:41:01.777919  492475 out.go:345] Setting OutFile to fd 1 ...
I0414 17:41:01.778061  492475 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 17:41:01.778068  492475 out.go:358] Setting ErrFile to fd 2...
I0414 17:41:01.778073  492475 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 17:41:01.778407  492475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
I0414 17:41:01.779213  492475 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 17:41:01.779381  492475 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 17:41:01.779920  492475 cli_runner.go:164] Run: docker container inspect functional-666858 --format={{.State.Status}}
I0414 17:41:01.806512  492475 ssh_runner.go:195] Run: systemctl --version
I0414 17:41:01.806575  492475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-666858
I0414 17:41:01.828326  492475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33177 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/functional-666858/id_rsa Username:docker}
I0414 17:41:01.923330  492475 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.72s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666858 ssh pgrep buildkitd: exit status 1 (361.846414ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image build -t localhost/my-image:functional-666858 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-arm64 -p functional-666858 image build -t localhost/my-image:functional-666858 testdata/build --alsologtostderr: (3.112621074s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-arm64 -p functional-666858 image build -t localhost/my-image:functional-666858 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5cf7ca18fa2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-666858
--> 4829fae2ec6
Successfully tagged localhost/my-image:functional-666858
4829fae2ec6a93a100f8a53bbfcef6e769b9802d8306181dcb0f9484a3ba7d4b
functional_test.go:340: (dbg) Stderr: out/minikube-linux-arm64 -p functional-666858 image build -t localhost/my-image:functional-666858 testdata/build --alsologtostderr:
I0414 17:41:02.435657  492625 out.go:345] Setting OutFile to fd 1 ...
I0414 17:41:02.436525  492625 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 17:41:02.436546  492625 out.go:358] Setting ErrFile to fd 2...
I0414 17:41:02.436553  492625 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 17:41:02.436851  492625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
I0414 17:41:02.437657  492625 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 17:41:02.439862  492625 config.go:182] Loaded profile config "functional-666858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 17:41:02.440399  492625 cli_runner.go:164] Run: docker container inspect functional-666858 --format={{.State.Status}}
I0414 17:41:02.466773  492625 ssh_runner.go:195] Run: systemctl --version
I0414 17:41:02.466832  492625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-666858
I0414 17:41:02.495003  492625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33177 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/functional-666858/id_rsa Username:docker}
I0414 17:41:02.587660  492625 build_images.go:161] Building image from path: /tmp/build.3495139320.tar
I0414 17:41:02.587733  492625 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0414 17:41:02.601475  492625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3495139320.tar
I0414 17:41:02.605345  492625 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3495139320.tar: stat -c "%s %y" /var/lib/minikube/build/build.3495139320.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3495139320.tar': No such file or directory
I0414 17:41:02.605435  492625 ssh_runner.go:362] scp /tmp/build.3495139320.tar --> /var/lib/minikube/build/build.3495139320.tar (3072 bytes)
I0414 17:41:02.647599  492625 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3495139320
I0414 17:41:02.658064  492625 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3495139320 -xf /var/lib/minikube/build/build.3495139320.tar
I0414 17:41:02.668182  492625 crio.go:315] Building image: /var/lib/minikube/build/build.3495139320
I0414 17:41:02.668286  492625 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-666858 /var/lib/minikube/build/build.3495139320 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0414 17:41:05.427495  492625 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-666858 /var/lib/minikube/build/build.3495139320 --cgroup-manager=cgroupfs: (2.759181537s)
I0414 17:41:05.427586  492625 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3495139320
I0414 17:41:05.436243  492625 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3495139320.tar
I0414 17:41:05.445465  492625 build_images.go:217] Built localhost/my-image:functional-666858 from /tmp/build.3495139320.tar
I0414 17:41:05.445494  492625 build_images.go:133] succeeded building to: functional-666858
I0414 17:41:05.445500  492625 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.72s)

TestFunctional/parallel/ImageCommands/Setup (0.71s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-666858
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.64s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image load --daemon kicbase/echo-server:functional-666858 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-arm64 -p functional-666858 image load --daemon kicbase/echo-server:functional-666858 --alsologtostderr: (1.365420722s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.64s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image load --daemon kicbase/echo-server:functional-666858 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-666858
functional_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image load --daemon kicbase/echo-server:functional-666858 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image save kicbase/echo-server:functional-666858 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image rm kicbase/echo-server:functional-666858 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.84s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.84s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-666858
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-666858 image save --daemon kicbase/echo-server:functional-666858 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-666858
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-666858
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-666858
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-666858
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (180.02s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-301425 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0414 17:41:18.017520  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:34.152313  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:01.859428  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-301425 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m59.189093429s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (180.02s)

TestMultiControlPlane/serial/DeployApp (9.09s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-301425 -- rollout status deployment/busybox: (6.169808253s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-ktz5p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-rn7ts -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-sqddf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-ktz5p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-rn7ts -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-sqddf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-ktz5p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-rn7ts -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-sqddf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.09s)

TestMultiControlPlane/serial/PingHostFromPods (1.53s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-ktz5p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-ktz5p -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-rn7ts -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-rn7ts -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-sqddf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301425 -- exec busybox-58667487b6-sqddf -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.53s)

TestMultiControlPlane/serial/AddWorkerNode (38.29s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-301425 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-301425 -v=7 --alsologtostderr: (37.350861435s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (38.29s)

TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-301425 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.97s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.97s)

TestMultiControlPlane/serial/CopyFile (19.15s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp testdata/cp-test.txt ha-301425:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1439460852/001/cp-test_ha-301425.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425:/home/docker/cp-test.txt ha-301425-m02:/home/docker/cp-test_ha-301425_ha-301425-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m02 "sudo cat /home/docker/cp-test_ha-301425_ha-301425-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425:/home/docker/cp-test.txt ha-301425-m03:/home/docker/cp-test_ha-301425_ha-301425-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m03 "sudo cat /home/docker/cp-test_ha-301425_ha-301425-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425:/home/docker/cp-test.txt ha-301425-m04:/home/docker/cp-test_ha-301425_ha-301425-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m04 "sudo cat /home/docker/cp-test_ha-301425_ha-301425-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp testdata/cp-test.txt ha-301425-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1439460852/001/cp-test_ha-301425-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425-m02:/home/docker/cp-test.txt ha-301425:/home/docker/cp-test_ha-301425-m02_ha-301425.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425 "sudo cat /home/docker/cp-test_ha-301425-m02_ha-301425.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425-m02:/home/docker/cp-test.txt ha-301425-m03:/home/docker/cp-test_ha-301425-m02_ha-301425-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m03 "sudo cat /home/docker/cp-test_ha-301425-m02_ha-301425-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425-m02:/home/docker/cp-test.txt ha-301425-m04:/home/docker/cp-test_ha-301425-m02_ha-301425-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m04 "sudo cat /home/docker/cp-test_ha-301425-m02_ha-301425-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp testdata/cp-test.txt ha-301425-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1439460852/001/cp-test_ha-301425-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425-m03:/home/docker/cp-test.txt ha-301425:/home/docker/cp-test_ha-301425-m03_ha-301425.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425 "sudo cat /home/docker/cp-test_ha-301425-m03_ha-301425.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425-m03:/home/docker/cp-test.txt ha-301425-m02:/home/docker/cp-test_ha-301425-m03_ha-301425-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m02 "sudo cat /home/docker/cp-test_ha-301425-m03_ha-301425-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425-m03:/home/docker/cp-test.txt ha-301425-m04:/home/docker/cp-test_ha-301425-m03_ha-301425-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m04 "sudo cat /home/docker/cp-test_ha-301425-m03_ha-301425-m04.txt"
E0414 17:45:13.187651  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:45:13.193917  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:45:13.205266  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:45:13.226821  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:45:13.268387  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp testdata/cp-test.txt ha-301425-m04:/home/docker/cp-test.txt
E0414 17:45:13.350596  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:45:13.511902  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m04 "sudo cat /home/docker/cp-test.txt"
E0414 17:45:13.833451  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1439460852/001/cp-test_ha-301425-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m04 "sudo cat /home/docker/cp-test.txt"
E0414 17:45:14.475311  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425-m04:/home/docker/cp-test.txt ha-301425:/home/docker/cp-test_ha-301425-m04_ha-301425.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425 "sudo cat /home/docker/cp-test_ha-301425-m04_ha-301425.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425-m04:/home/docker/cp-test.txt ha-301425-m02:/home/docker/cp-test_ha-301425-m04_ha-301425-m02.txt
E0414 17:45:15.757027  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m02 "sudo cat /home/docker/cp-test_ha-301425-m04_ha-301425-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 cp ha-301425-m04:/home/docker/cp-test.txt ha-301425-m03:/home/docker/cp-test_ha-301425-m04_ha-301425-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 ssh -n ha-301425-m03 "sudo cat /home/docker/cp-test_ha-301425-m04_ha-301425-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.15s)
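The CopyFile log above walks a full matrix: seed each of the four nodes with testdata/cp-test.txt, copy that file from every node to every other node, and verify each copy with `ssh` + `sudo cat`. The loop below is a hypothetical reconstruction that only *prints* the command matrix rather than running it (the real test uses the `out/minikube-linux-arm64` binary against a live `ha-301425` profile, and additionally copies each file back to a local temp directory, which this sketch omits):

```shell
# Hypothetical sketch of the CopyFile command matrix from the log above.
# Emits the commands instead of executing them, since a running minikube
# profile named ha-301425 is assumed by the real test.
PROFILE=ha-301425
NODES="$PROFILE $PROFILE-m02 $PROFILE-m03 $PROFILE-m04"

print_cp_matrix() {
  for src in $NODES; do
    # Seed the source node with the test file.
    echo "minikube -p $PROFILE cp testdata/cp-test.txt $src:/home/docker/cp-test.txt"
    for dst in $NODES; do
      [ "$src" = "$dst" ] && continue
      # Copy node-to-node, then verify the copy landed on the destination.
      echo "minikube -p $PROFILE cp $src:/home/docker/cp-test.txt $dst:/home/docker/cp-test_${src}_${dst}.txt"
      echo "minikube -p $PROFILE ssh -n $dst \"sudo cat /home/docker/cp-test_${src}_${dst}.txt\""
    done
  done
}

print_cp_matrix
```

For 4 nodes this yields 4 seed commands plus 12 ordered pairs at 2 commands each, 28 commands in total, matching the shape of the helpers_test.go runs logged above.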

TestMultiControlPlane/serial/StopSecondaryNode (12.75s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 node stop m02 -v=7 --alsologtostderr
E0414 17:45:18.318444  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:45:23.440185  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-301425 node stop m02 -v=7 --alsologtostderr: (11.999908097s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-301425 status -v=7 --alsologtostderr: exit status 7 (753.334174ms)

-- stdout --
	ha-301425
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-301425-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-301425-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-301425-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0414 17:45:29.808041  508607 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:45:29.808237  508607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:45:29.808266  508607 out.go:358] Setting ErrFile to fd 2...
	I0414 17:45:29.808285  508607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:45:29.808568  508607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
	I0414 17:45:29.808783  508607 out.go:352] Setting JSON to false
	I0414 17:45:29.808872  508607 mustload.go:65] Loading cluster: ha-301425
	I0414 17:45:29.808941  508607 notify.go:220] Checking for updates...
	I0414 17:45:29.809928  508607 config.go:182] Loaded profile config "ha-301425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:45:29.809982  508607 status.go:174] checking status of ha-301425 ...
	I0414 17:45:29.810634  508607 cli_runner.go:164] Run: docker container inspect ha-301425 --format={{.State.Status}}
	I0414 17:45:29.830044  508607 status.go:371] ha-301425 host status = "Running" (err=<nil>)
	I0414 17:45:29.830066  508607 host.go:66] Checking if "ha-301425" exists ...
	I0414 17:45:29.830531  508607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-301425
	I0414 17:45:29.862755  508607 host.go:66] Checking if "ha-301425" exists ...
	I0414 17:45:29.863136  508607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 17:45:29.863216  508607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-301425
	I0414 17:45:29.882113  508607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/ha-301425/id_rsa Username:docker}
	I0414 17:45:29.971988  508607 ssh_runner.go:195] Run: systemctl --version
	I0414 17:45:29.976370  508607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:45:29.988692  508607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 17:45:30.068870  508607 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-04-14 17:45:30.058352941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0414 17:45:30.069446  508607 kubeconfig.go:125] found "ha-301425" server: "https://192.168.49.254:8443"
	I0414 17:45:30.069491  508607 api_server.go:166] Checking apiserver status ...
	I0414 17:45:30.069538  508607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:45:30.083855  508607 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup
	I0414 17:45:30.094729  508607 api_server.go:182] apiserver freezer: "4:freezer:/docker/6e0675f2441d4ed365b589d3761d0cb86be17097b7020bd30e79206b8684fa77/crio/crio-f56043988d369711bafb7c4be98565301aee12d86cb4ca61a5d80ca155974876"
	I0414 17:45:30.094810  508607 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6e0675f2441d4ed365b589d3761d0cb86be17097b7020bd30e79206b8684fa77/crio/crio-f56043988d369711bafb7c4be98565301aee12d86cb4ca61a5d80ca155974876/freezer.state
	I0414 17:45:30.104558  508607 api_server.go:204] freezer state: "THAWED"
	I0414 17:45:30.104590  508607 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0414 17:45:30.112832  508607 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0414 17:45:30.112861  508607 status.go:463] ha-301425 apiserver status = Running (err=<nil>)
	I0414 17:45:30.112874  508607 status.go:176] ha-301425 status: &{Name:ha-301425 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 17:45:30.112900  508607 status.go:174] checking status of ha-301425-m02 ...
	I0414 17:45:30.113241  508607 cli_runner.go:164] Run: docker container inspect ha-301425-m02 --format={{.State.Status}}
	I0414 17:45:30.132209  508607 status.go:371] ha-301425-m02 host status = "Stopped" (err=<nil>)
	I0414 17:45:30.132243  508607 status.go:384] host is not running, skipping remaining checks
	I0414 17:45:30.132253  508607 status.go:176] ha-301425-m02 status: &{Name:ha-301425-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 17:45:30.132276  508607 status.go:174] checking status of ha-301425-m03 ...
	I0414 17:45:30.132708  508607 cli_runner.go:164] Run: docker container inspect ha-301425-m03 --format={{.State.Status}}
	I0414 17:45:30.157471  508607 status.go:371] ha-301425-m03 host status = "Running" (err=<nil>)
	I0414 17:45:30.157498  508607 host.go:66] Checking if "ha-301425-m03" exists ...
	I0414 17:45:30.158293  508607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-301425-m03
	I0414 17:45:30.179648  508607 host.go:66] Checking if "ha-301425-m03" exists ...
	I0414 17:45:30.180004  508607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 17:45:30.180056  508607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-301425-m03
	I0414 17:45:30.199523  508607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/ha-301425-m03/id_rsa Username:docker}
	I0414 17:45:30.287827  508607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:45:30.299974  508607 kubeconfig.go:125] found "ha-301425" server: "https://192.168.49.254:8443"
	I0414 17:45:30.300003  508607 api_server.go:166] Checking apiserver status ...
	I0414 17:45:30.300048  508607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:45:30.312512  508607 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	I0414 17:45:30.321859  508607 api_server.go:182] apiserver freezer: "4:freezer:/docker/034c6f0a7f8b5d23efe9918de406d467755bda45908144feae5db0696afe0d09/crio/crio-e2c0be9da08a41ef37e303b8ed0760c0992a559590d3cb95f66e68eef4e4a4bf"
	I0414 17:45:30.321927  508607 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/034c6f0a7f8b5d23efe9918de406d467755bda45908144feae5db0696afe0d09/crio/crio-e2c0be9da08a41ef37e303b8ed0760c0992a559590d3cb95f66e68eef4e4a4bf/freezer.state
	I0414 17:45:30.330998  508607 api_server.go:204] freezer state: "THAWED"
	I0414 17:45:30.331030  508607 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0414 17:45:30.338780  508607 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0414 17:45:30.338863  508607 status.go:463] ha-301425-m03 apiserver status = Running (err=<nil>)
	I0414 17:45:30.338890  508607 status.go:176] ha-301425-m03 status: &{Name:ha-301425-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 17:45:30.338914  508607 status.go:174] checking status of ha-301425-m04 ...
	I0414 17:45:30.339234  508607 cli_runner.go:164] Run: docker container inspect ha-301425-m04 --format={{.State.Status}}
	I0414 17:45:30.358449  508607 status.go:371] ha-301425-m04 host status = "Running" (err=<nil>)
	I0414 17:45:30.358487  508607 host.go:66] Checking if "ha-301425-m04" exists ...
	I0414 17:45:30.358861  508607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-301425-m04
	I0414 17:45:30.379414  508607 host.go:66] Checking if "ha-301425-m04" exists ...
	I0414 17:45:30.379743  508607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 17:45:30.379789  508607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-301425-m04
	I0414 17:45:30.397919  508607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/ha-301425-m04/id_rsa Username:docker}
	I0414 17:45:30.496711  508607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:45:30.510919  508607 status.go:176] ha-301425-m04 status: &{Name:ha-301425-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.75s)
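When a node is down, `minikube status` exits non-zero (exit status 7 in the run above) and prints the per-node dump shown in the `-- stdout --` block. A script gating on that output might count stopped hosts as below. This is a hedged sketch: it only assumes the plain-text `host:` lines visible in the log, not any stable minikube output contract, and real scripts should prefer `minikube status --output json`:

```shell
# count_stopped: count "host: Stopped" lines in `minikube status` text on stdin.
# Sketch based on the log output above; minikube's plain-text format is not a
# stable interface, so parse --output json in anything production-grade.
count_stopped() {
  grep -c 'host: Stopped' || true   # grep -c prints 0 but exits 1 on no match
}
```

Against the stdout block above (one stopped control plane out of four nodes), `minikube -p ha-301425 status | count_stopped` would print 1.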

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

TestMultiControlPlane/serial/RestartSecondaryNode (33.49s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 node start m02 -v=7 --alsologtostderr
E0414 17:45:33.682450  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:45:54.163973  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-301425 node start m02 -v=7 --alsologtostderr: (32.086730802s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-301425 status -v=7 --alsologtostderr: (1.274432474s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.49s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.42s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.415241996s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.42s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (173.81s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-301425 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-301425 -v=7 --alsologtostderr
E0414 17:46:35.125974  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-301425 -v=7 --alsologtostderr: (37.266182582s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-301425 --wait=true -v=7 --alsologtostderr
E0414 17:47:57.048284  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:48:34.151996  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-301425 --wait=true -v=7 --alsologtostderr: (2m16.351388725s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-301425
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (173.81s)
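The invariant RestartClusterKeepsNodes exercises is that the node list captured before `minikube stop` matches the list after `minikube start --wait=true`. A hypothetical helper for that comparison (the captured list text is passed in as arguments; the real ha_test.go compares parsed node structs, not raw text):

```shell
# nodes_unchanged: succeed iff the `minikube node list` text captured before a
# stop/start cycle is identical to the text captured afterwards.
# Hedged sketch of the test's invariant, not the actual ha_test.go logic.
nodes_unchanged() {
  before=$1
  after=$2
  [ "$before" = "$after" ]
}
```

Usage would be along the lines of `before=$(minikube node list -p ha-301425); minikube stop -p ha-301425; minikube start -p ha-301425 --wait=true; nodes_unchanged "$before" "$(minikube node list -p ha-301425)"`.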

TestMultiControlPlane/serial/DeleteSecondaryNode (12.58s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-301425 node delete m03 -v=7 --alsologtostderr: (11.681321596s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.58s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

TestMultiControlPlane/serial/StopCluster (35.79s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-301425 stop -v=7 --alsologtostderr: (35.679910859s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-301425 status -v=7 --alsologtostderr: exit status 7 (113.777454ms)

-- stdout --
	ha-301425
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-301425-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-301425-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0414 17:49:49.054141  522873 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:49:49.054264  522873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:49:49.054276  522873 out.go:358] Setting ErrFile to fd 2...
	I0414 17:49:49.054281  522873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:49:49.054577  522873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
	I0414 17:49:49.054791  522873 out.go:352] Setting JSON to false
	I0414 17:49:49.054840  522873 mustload.go:65] Loading cluster: ha-301425
	I0414 17:49:49.054906  522873 notify.go:220] Checking for updates...
	I0414 17:49:49.056164  522873 config.go:182] Loaded profile config "ha-301425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:49:49.056197  522873 status.go:174] checking status of ha-301425 ...
	I0414 17:49:49.057092  522873 cli_runner.go:164] Run: docker container inspect ha-301425 --format={{.State.Status}}
	I0414 17:49:49.075637  522873 status.go:371] ha-301425 host status = "Stopped" (err=<nil>)
	I0414 17:49:49.075657  522873 status.go:384] host is not running, skipping remaining checks
	I0414 17:49:49.075671  522873 status.go:176] ha-301425 status: &{Name:ha-301425 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 17:49:49.075696  522873 status.go:174] checking status of ha-301425-m02 ...
	I0414 17:49:49.076007  522873 cli_runner.go:164] Run: docker container inspect ha-301425-m02 --format={{.State.Status}}
	I0414 17:49:49.096911  522873 status.go:371] ha-301425-m02 host status = "Stopped" (err=<nil>)
	I0414 17:49:49.096930  522873 status.go:384] host is not running, skipping remaining checks
	I0414 17:49:49.096937  522873 status.go:176] ha-301425-m02 status: &{Name:ha-301425-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 17:49:49.096957  522873 status.go:174] checking status of ha-301425-m04 ...
	I0414 17:49:49.097261  522873 cli_runner.go:164] Run: docker container inspect ha-301425-m04 --format={{.State.Status}}
	I0414 17:49:49.114300  522873 status.go:371] ha-301425-m04 host status = "Stopped" (err=<nil>)
	I0414 17:49:49.114339  522873 status.go:384] host is not running, skipping remaining checks
	I0414 17:49:49.114347  522873 status.go:176] ha-301425-m04 status: &{Name:ha-301425-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.79s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (97.48s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-301425 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0414 17:50:13.187555  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:50:40.890395  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-301425 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m36.546068946s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (97.48s)
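The `kubectl get nodes -o go-template` check above walks the NodeList with Go's text/template. As a rough local illustration — the template string is copied from the log, but the sample data and the `renderReady` helper are hypothetical stand-ins, not kubectl's own types:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// readyTmpl is the exact go-template string passed to kubectl in the log above.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// renderReady executes the template against any NodeList-shaped value
// and returns the rendered Ready statuses, one per line.
func renderReady(data any) (string, error) {
	tmpl, err := template.New("ready").Parse(readyTmpl)
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	err = tmpl.Execute(&out, data)
	return out.String(), err
}

func main() {
	// Hypothetical stand-in for the node list kubectl would return,
	// trimmed to only the fields the template reads.
	data := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	s, err := renderReady(data)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", s)
}
```

Lowercase field access (`.items`, `.status`) works here because the sample data is built from maps, mirroring the untyped JSON kubectl feeds into the template.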

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (73.09s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-301425 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-301425 --control-plane -v=7 --alsologtostderr: (1m12.110453672s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-301425 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.09s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

                                                
                                    
TestJSONOutput/start/Command (47.83s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-100482 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0414 17:53:34.151947  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-100482 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (47.821698134s)
--- PASS: TestJSONOutput/start/Command (47.83s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-100482 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-100482 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.89s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-100482 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-100482 --output=json --user=testUser: (5.889624937s)
--- PASS: TestJSONOutput/stop/Command (5.89s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-312714 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-312714 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.285301ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d32505cd-1f68-47fd-985e-480f2c84422e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-312714] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"899c5971-80b3-4e88-865c-df5d8ac0162f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20201"}}
	{"specversion":"1.0","id":"8b9c87d9-d20b-4cd4-a574-683d51bfe780","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3fac7dea-fe95-42ff-b196-102231a47f9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20201-457936/kubeconfig"}}
	{"specversion":"1.0","id":"94561a9f-7d71-4bba-8702-793ec08d801f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20201-457936/.minikube"}}
	{"specversion":"1.0","id":"ae5e559d-cdcc-4eb8-8ce0-d76383b4a3d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7db5bece-8b4e-426a-911d-2ef150100df7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"21f63f23-d294-4523-a50e-f43fde1c03eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-312714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-312714
--- PASS: TestErrorJSONOutput (0.24s)
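Each line in the `--output=json` stream above is a CloudEvents-style envelope. A minimal decoder for the fields visible in the log — the `event` struct and `parseEvent` helper are illustrative only, trimmed to what the log shows, not minikube's own types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// event covers only the envelope fields visible in the log above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

// parseEvent decodes one JSON line from the event stream.
func parseEvent(line string) (event, error) {
	var ev event
	err := json.Unmarshal([]byte(line), &ev)
	return ev, err
}

func main() {
	// The error event from the log above, verbatim.
	line := `{"specversion":"1.0","id":"21f63f23-d294-4523-a50e-f43fde1c03eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	ev, err := parseEvent(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["name"])
}
```

Keeping `data` as a string map lets one decoder handle `step`, `info`, and `error` events alike, since their payload keys differ.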

                                                
                                    
TestKicCustomNetwork/create_custom_network (40.64s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-083229 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-083229 --network=: (38.422918391s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-083229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-083229
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-083229: (2.192424494s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.64s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (32.74s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-222486 --network=bridge
E0414 17:54:57.222530  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-222486 --network=bridge: (30.551338489s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-222486" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-222486
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-222486: (2.133941538s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.74s)

                                                
                                    
TestKicExistingNetwork (34.74s)

=== RUN   TestKicExistingNetwork
I0414 17:55:02.494254  463312 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0414 17:55:02.510122  463312 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0414 17:55:02.510939  463312 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0414 17:55:02.510963  463312 cli_runner.go:164] Run: docker network inspect existing-network
W0414 17:55:02.527436  463312 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0414 17:55:02.527468  463312 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0414 17:55:02.527487  463312 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0414 17:55:02.527686  463312 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0414 17:55:02.545234  463312 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f85dfb368301 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:3b:69:ec:7d:0a} reservation:<nil>}
I0414 17:55:02.551142  463312 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0414 17:55:02.551499  463312 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c9d850}
I0414 17:55:02.552068  463312 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0414 17:55:02.552135  463312 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0414 17:55:02.613436  463312 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-457119 --network=existing-network
E0414 17:55:13.187537  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-457119 --network=existing-network: (32.560411591s)
helpers_test.go:175: Cleaning up "existing-network-457119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-457119
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-457119: (2.022305231s)
I0414 17:55:37.213111  463312 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.74s)
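The subnet-selection trace above (192.168.49.0/24 skipped as taken, 192.168.58.0/24 skipped as reserved, 192.168.67.0/24 chosen) suggests a scan over candidate /24s in steps of 9. The sketch below is a simplified, hypothetical reconstruction of that scan under those assumptions — `pickFreeSubnet` is not minikube's actual `network.go` code:

```go
package main

import "fmt"

// pickFreeSubnet walks candidate 192.168.x.0/24 subnets starting at .49
// in steps of 9 (49, 58, 67, ...) and returns the first subnet that is
// neither taken by an existing interface nor reserved.
func pickFreeSubnet(taken, reserved map[string]bool) string {
	for third := 49; third < 256; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[subnet] || reserved[subnet] {
			continue
		}
		return subnet
	}
	return "" // no free candidate found
}

func main() {
	// State matching the log: .49 is in use by br-f85dfb368301, .58 is reserved.
	taken := map[string]bool{"192.168.49.0/24": true}
	reserved := map[string]bool{"192.168.58.0/24": true}
	fmt.Println(pickFreeSubnet(taken, reserved)) // 192.168.67.0/24, as in the log
}
```

With both earlier candidates unavailable, the scan lands on 192.168.67.0/24 — the subnet the `docker network create` line above actually uses.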

                                                
                                    
TestKicCustomSubnet (31.04s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-054620 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-054620 --subnet=192.168.60.0/24: (28.834505639s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-054620 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-054620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-054620
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-054620: (2.171801458s)
--- PASS: TestKicCustomSubnet (31.04s)

                                                
                                    
TestKicStaticIP (33.25s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-238636 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-238636 --static-ip=192.168.200.200: (30.995956752s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-238636 ip
helpers_test.go:175: Cleaning up "static-ip-238636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-238636
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-238636: (2.096466296s)
--- PASS: TestKicStaticIP (33.25s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (71.61s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-956071 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-956071 --driver=docker  --container-runtime=crio: (32.215886754s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-959065 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-959065 --driver=docker  --container-runtime=crio: (33.720882372s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-956071
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-959065
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-959065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-959065
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-959065: (1.983514312s)
helpers_test.go:175: Cleaning up "first-956071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-956071
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-956071: (2.327439728s)
--- PASS: TestMinikubeProfile (71.61s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.56s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-632214 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-632214 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.557458529s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.56s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-632214 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.21s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-634028 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-634028 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.211617347s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.21s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-634028 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-632214 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-632214 --alsologtostderr -v=5: (1.619320866s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-634028 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-634028
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-634028: (1.192842425s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.51s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-634028
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-634028: (6.514129005s)
--- PASS: TestMountStart/serial/RestartStopped (7.51s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-634028 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (81.57s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-524008 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0414 17:58:34.151922  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-524008 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m21.060363066s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.57s)

TestMultiNode/serial/DeployApp2Nodes (6.51s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-524008 -- rollout status deployment/busybox: (4.637126615s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- exec busybox-58667487b6-blzkg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- exec busybox-58667487b6-p9bt2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- exec busybox-58667487b6-blzkg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- exec busybox-58667487b6-p9bt2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- exec busybox-58667487b6-blzkg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- exec busybox-58667487b6-p9bt2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.51s)

TestMultiNode/serial/PingHostFrom2Pods (1.01s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- exec busybox-58667487b6-blzkg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- exec busybox-58667487b6-blzkg -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- exec busybox-58667487b6-p9bt2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524008 -- exec busybox-58667487b6-p9bt2 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)

TestMultiNode/serial/AddNode (31.83s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-524008 -v 3 --alsologtostderr
E0414 18:00:13.187315  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-524008 -v 3 --alsologtostderr: (31.143578495s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (31.83s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-524008 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.98s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 cp testdata/cp-test.txt multinode-524008:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 cp multinode-524008:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile961062981/001/cp-test_multinode-524008.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 cp multinode-524008:/home/docker/cp-test.txt multinode-524008-m02:/home/docker/cp-test_multinode-524008_multinode-524008-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008-m02 "sudo cat /home/docker/cp-test_multinode-524008_multinode-524008-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 cp multinode-524008:/home/docker/cp-test.txt multinode-524008-m03:/home/docker/cp-test_multinode-524008_multinode-524008-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008-m03 "sudo cat /home/docker/cp-test_multinode-524008_multinode-524008-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 cp testdata/cp-test.txt multinode-524008-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 cp multinode-524008-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile961062981/001/cp-test_multinode-524008-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 cp multinode-524008-m02:/home/docker/cp-test.txt multinode-524008:/home/docker/cp-test_multinode-524008-m02_multinode-524008.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008 "sudo cat /home/docker/cp-test_multinode-524008-m02_multinode-524008.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 cp multinode-524008-m02:/home/docker/cp-test.txt multinode-524008-m03:/home/docker/cp-test_multinode-524008-m02_multinode-524008-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008-m03 "sudo cat /home/docker/cp-test_multinode-524008-m02_multinode-524008-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 cp testdata/cp-test.txt multinode-524008-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 cp multinode-524008-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile961062981/001/cp-test_multinode-524008-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 cp multinode-524008-m03:/home/docker/cp-test.txt multinode-524008:/home/docker/cp-test_multinode-524008-m03_multinode-524008.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008 "sudo cat /home/docker/cp-test_multinode-524008-m03_multinode-524008.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 cp multinode-524008-m03:/home/docker/cp-test.txt multinode-524008-m02:/home/docker/cp-test_multinode-524008-m03_multinode-524008-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 ssh -n multinode-524008-m02 "sudo cat /home/docker/cp-test_multinode-524008-m03_multinode-524008-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.98s)

TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-524008 node stop m03: (1.213846753s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-524008 status: exit status 7 (512.409006ms)
-- stdout --
	multinode-524008
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-524008-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-524008-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-524008 status --alsologtostderr: exit status 7 (510.391011ms)
-- stdout --
	multinode-524008
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-524008-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-524008-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0414 18:00:34.525103  577310 out.go:345] Setting OutFile to fd 1 ...
	I0414 18:00:34.525634  577310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 18:00:34.525678  577310 out.go:358] Setting ErrFile to fd 2...
	I0414 18:00:34.525699  577310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 18:00:34.525990  577310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
	I0414 18:00:34.526211  577310 out.go:352] Setting JSON to false
	I0414 18:00:34.526285  577310 mustload.go:65] Loading cluster: multinode-524008
	I0414 18:00:34.526366  577310 notify.go:220] Checking for updates...
	I0414 18:00:34.527643  577310 config.go:182] Loaded profile config "multinode-524008": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 18:00:34.527710  577310 status.go:174] checking status of multinode-524008 ...
	I0414 18:00:34.528414  577310 cli_runner.go:164] Run: docker container inspect multinode-524008 --format={{.State.Status}}
	I0414 18:00:34.546641  577310 status.go:371] multinode-524008 host status = "Running" (err=<nil>)
	I0414 18:00:34.546664  577310 host.go:66] Checking if "multinode-524008" exists ...
	I0414 18:00:34.546971  577310 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-524008
	I0414 18:00:34.577690  577310 host.go:66] Checking if "multinode-524008" exists ...
	I0414 18:00:34.578077  577310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 18:00:34.578135  577310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-524008
	I0414 18:00:34.597165  577310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33302 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/multinode-524008/id_rsa Username:docker}
	I0414 18:00:34.687711  577310 ssh_runner.go:195] Run: systemctl --version
	I0414 18:00:34.692078  577310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 18:00:34.703624  577310 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 18:00:34.767548  577310 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-04-14 18:00:34.756014561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0414 18:00:34.768112  577310 kubeconfig.go:125] found "multinode-524008" server: "https://192.168.58.2:8443"
	I0414 18:00:34.768205  577310 api_server.go:166] Checking apiserver status ...
	I0414 18:00:34.768283  577310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 18:00:34.780656  577310 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup
	I0414 18:00:34.790767  577310 api_server.go:182] apiserver freezer: "4:freezer:/docker/d9a6982c06cf5a767816fa56281bc47aed7b186cdfa6b1e606ccec9218909159/crio/crio-f2e37b82284405c88167ef6b3148734657300d6b0b857844bb1d668d1c292eb8"
	I0414 18:00:34.790841  577310 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d9a6982c06cf5a767816fa56281bc47aed7b186cdfa6b1e606ccec9218909159/crio/crio-f2e37b82284405c88167ef6b3148734657300d6b0b857844bb1d668d1c292eb8/freezer.state
	I0414 18:00:34.799830  577310 api_server.go:204] freezer state: "THAWED"
	I0414 18:00:34.799860  577310 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0414 18:00:34.807811  577310 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0414 18:00:34.807839  577310 status.go:463] multinode-524008 apiserver status = Running (err=<nil>)
	I0414 18:00:34.807850  577310 status.go:176] multinode-524008 status: &{Name:multinode-524008 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 18:00:34.807870  577310 status.go:174] checking status of multinode-524008-m02 ...
	I0414 18:00:34.808201  577310 cli_runner.go:164] Run: docker container inspect multinode-524008-m02 --format={{.State.Status}}
	I0414 18:00:34.825422  577310 status.go:371] multinode-524008-m02 host status = "Running" (err=<nil>)
	I0414 18:00:34.825457  577310 host.go:66] Checking if "multinode-524008-m02" exists ...
	I0414 18:00:34.825765  577310 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-524008-m02
	I0414 18:00:34.845089  577310 host.go:66] Checking if "multinode-524008-m02" exists ...
	I0414 18:00:34.845416  577310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 18:00:34.845467  577310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-524008-m02
	I0414 18:00:34.862956  577310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33307 SSHKeyPath:/home/jenkins/minikube-integration/20201-457936/.minikube/machines/multinode-524008-m02/id_rsa Username:docker}
	I0414 18:00:34.951680  577310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 18:00:34.962977  577310 status.go:176] multinode-524008-m02 status: &{Name:multinode-524008-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0414 18:00:34.963011  577310 status.go:174] checking status of multinode-524008-m03 ...
	I0414 18:00:34.963344  577310 cli_runner.go:164] Run: docker container inspect multinode-524008-m03 --format={{.State.Status}}
	I0414 18:00:34.981650  577310 status.go:371] multinode-524008-m03 host status = "Stopped" (err=<nil>)
	I0414 18:00:34.981672  577310 status.go:384] host is not running, skipping remaining checks
	I0414 18:00:34.981679  577310 status.go:176] multinode-524008-m03 status: &{Name:multinode-524008-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

TestMultiNode/serial/StartAfterStop (9.79s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-524008 node start m03 -v=7 --alsologtostderr: (9.034870771s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.79s)

TestMultiNode/serial/RestartKeepsNodes (88.28s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-524008
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-524008
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-524008: (24.824981459s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-524008 --wait=true -v=8 --alsologtostderr
E0414 18:01:36.252138  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-524008 --wait=true -v=8 --alsologtostderr: (1m3.329922225s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-524008
--- PASS: TestMultiNode/serial/RestartKeepsNodes (88.28s)

TestMultiNode/serial/DeleteNode (5.28s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-524008 node delete m03: (4.635411234s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.28s)

TestMultiNode/serial/StopMultiNode (23.92s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-524008 stop: (23.681965093s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-524008 status: exit status 7 (118.600065ms)
-- stdout --
	multinode-524008
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-524008-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-524008 status --alsologtostderr: exit status 7 (115.518484ms)
-- stdout --
	multinode-524008
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-524008-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0414 18:02:42.184130  584938 out.go:345] Setting OutFile to fd 1 ...
	I0414 18:02:42.184277  584938 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 18:02:42.184283  584938 out.go:358] Setting ErrFile to fd 2...
	I0414 18:02:42.184287  584938 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 18:02:42.184587  584938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
	I0414 18:02:42.184886  584938 out.go:352] Setting JSON to false
	I0414 18:02:42.184952  584938 mustload.go:65] Loading cluster: multinode-524008
	I0414 18:02:42.185094  584938 notify.go:220] Checking for updates...
	I0414 18:02:42.185379  584938 config.go:182] Loaded profile config "multinode-524008": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 18:02:42.185413  584938 status.go:174] checking status of multinode-524008 ...
	I0414 18:02:42.185955  584938 cli_runner.go:164] Run: docker container inspect multinode-524008 --format={{.State.Status}}
	I0414 18:02:42.210373  584938 status.go:371] multinode-524008 host status = "Stopped" (err=<nil>)
	I0414 18:02:42.210401  584938 status.go:384] host is not running, skipping remaining checks
	I0414 18:02:42.210410  584938 status.go:176] multinode-524008 status: &{Name:multinode-524008 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 18:02:42.210470  584938 status.go:174] checking status of multinode-524008-m02 ...
	I0414 18:02:42.210799  584938 cli_runner.go:164] Run: docker container inspect multinode-524008-m02 --format={{.State.Status}}
	I0414 18:02:42.242527  584938 status.go:371] multinode-524008-m02 host status = "Stopped" (err=<nil>)
	I0414 18:02:42.242555  584938 status.go:384] host is not running, skipping remaining checks
	I0414 18:02:42.242561  584938 status.go:176] multinode-524008-m02 status: &{Name:multinode-524008-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.92s)

TestMultiNode/serial/RestartMultiNode (62.96s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-524008 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0414 18:03:34.152005  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-524008 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m2.20013883s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524008 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (62.96s)

TestMultiNode/serial/ValidateNameConflict (32.1s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-524008
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-524008-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-524008-m02 --driver=docker  --container-runtime=crio: exit status 14 (103.717866ms)
-- stdout --
	* [multinode-524008-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20201
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20201-457936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20201-457936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-524008-m02' is duplicated with machine name 'multinode-524008-m02' in profile 'multinode-524008'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-524008-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-524008-m03 --driver=docker  --container-runtime=crio: (29.597643609s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-524008
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-524008: exit status 80 (347.78543ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-524008 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-524008-m03 already exists in multinode-524008-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-524008-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-524008-m03: (1.970156404s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.10s)
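The MK_USAGE failure above comes from minikube's multi-node naming scheme: worker machines in profile P are named P-m02, P-m03, and so on, so a new profile named `multinode-524008-m02` collides with an existing machine of the `multinode-524008` profile. A minimal Python sketch of that uniqueness check (hypothetical helper names; minikube's real check is in Go):

```python
def machine_names(profile: str, node_count: int) -> list[str]:
    """Machine names minikube derives for a multi-node profile: the
    primary uses the profile name, workers get -m02, -m03, ..."""
    return [profile] + [f"{profile}-m{i:02d}" for i in range(2, node_count + 1)]

def profile_name_conflicts(new_profile: str, existing: dict[str, int]) -> bool:
    """True if new_profile collides with any existing profile's machine names.
    `existing` maps profile name -> node count."""
    return any(new_profile in machine_names(p, n) for p, n in existing.items())
```

With `existing = {"multinode-524008": 2}`, `profile_name_conflicts("multinode-524008-m02", existing)` is true (the duplicated name the test expects), while `multinode-524008-m03` is free, which is why the subsequent start at line 472 succeeds.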

                                                
                                    
TestPreload (126.58s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-902350 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0414 18:05:13.187532  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-902350 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m33.172176364s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-902350 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-902350 image pull gcr.io/k8s-minikube/busybox: (3.166423134s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-902350
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-902350: (5.830904912s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-902350 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-902350 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.738724864s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-902350 image list
helpers_test.go:175: Cleaning up "test-preload-902350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-902350
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-902350: (2.413999743s)
--- PASS: TestPreload (126.58s)

                                                
                                    
TestScheduledStopUnix (107.77s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-468469 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-468469 --memory=2048 --driver=docker  --container-runtime=crio: (31.483974298s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-468469 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-468469 -n scheduled-stop-468469
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-468469 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0414 18:07:00.204612  463312 retry.go:31] will retry after 59.31µs: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.206068  463312 retry.go:31] will retry after 203.911µs: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.206412  463312 retry.go:31] will retry after 278.496µs: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.207565  463312 retry.go:31] will retry after 249.295µs: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.208746  463312 retry.go:31] will retry after 731.738µs: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.209915  463312 retry.go:31] will retry after 637.996µs: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.211131  463312 retry.go:31] will retry after 1.00539ms: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.212250  463312 retry.go:31] will retry after 1.160093ms: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.214521  463312 retry.go:31] will retry after 1.81224ms: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.216776  463312 retry.go:31] will retry after 4.09397ms: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.222043  463312 retry.go:31] will retry after 6.018701ms: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.228227  463312 retry.go:31] will retry after 8.607818ms: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.237490  463312 retry.go:31] will retry after 12.309164ms: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.250725  463312 retry.go:31] will retry after 23.744256ms: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.276056  463312 retry.go:31] will retry after 38.19579ms: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
I0414 18:07:00.315336  463312 retry.go:31] will retry after 40.697334ms: open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/scheduled-stop-468469/pid: no such file or directory
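The retry lines above are minikube's retry.go polling for the scheduled-stop pid file with growing, jittered waits. A rough Python sketch of that backoff pattern (the factor and jitter values here are illustrative assumptions, not minikube's actual parameters):

```python
import random

def backoff_delays(initial_us: float, factor: float, jitter: float, steps: int):
    """Yield successive retry delays in microseconds: each step scales the
    base by `factor` and adds up to `jitter` fraction of random slop."""
    delay = initial_us
    for _ in range(steps):
        yield delay * (1 + random.uniform(0, jitter))
        delay *= factor
```

Starting around 60µs and growing geometrically reproduces the general shape of the log: sub-millisecond waits at first, tens of milliseconds by the sixteenth attempt.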
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-468469 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-468469 -n scheduled-stop-468469
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-468469
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-468469 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-468469
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-468469: exit status 7 (74.423772ms)

                                                
                                                
-- stdout --
	scheduled-stop-468469
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-468469 -n scheduled-stop-468469
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-468469 -n scheduled-stop-468469: exit status 7 (66.557954ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-468469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-468469
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-468469: (4.597244673s)
--- PASS: TestScheduledStopUnix (107.77s)

                                                
                                    
TestInsufficientStorage (10.91s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-844108 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-844108 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.431729803s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cd453b35-0cd9-4afc-a172-bed376876cae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-844108] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a67624fa-7b96-43e3-b538-15a5530c9174","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20201"}}
	{"specversion":"1.0","id":"a701fdac-a768-48fa-b092-70e983347d50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"50ac553c-0b41-4c73-8b33-3eb868d69cb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20201-457936/kubeconfig"}}
	{"specversion":"1.0","id":"b693e8af-c593-4f14-ac8a-8e238459ccdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20201-457936/.minikube"}}
	{"specversion":"1.0","id":"e13587fe-7a87-48a5-b85d-66ca8b8f6ec0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5dd27c64-bbc6-4fe1-91d2-cdb7ec0c8db1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"94310a9e-a3ee-4bc2-bc53-a602a812f160","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"af480e06-766b-4c09-a5e0-c45380d70a51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a1b6490d-d24b-47b6-9083-2f3f2829fa8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"db960225-b467-4cbb-b32e-f1f6e1cbe603","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"fa359e7b-1d1e-42cf-9666-b2d417cdce8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-844108\" primary control-plane node in \"insufficient-storage-844108\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c2b9204-6288-437e-95bc-c0978101a9e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1744107393-20604 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"33361c6b-22ac-4903-9092-5d5169918b06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"40d58b23-ffef-4abd-863b-f9d55dab58a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
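With `--output=json`, minikube emits one CloudEvents envelope per line, and the failure above is an `io.k8s.sigs.minikube.error` event carrying the exit code and advice. A small Python sketch that scans such output for the first error event, using a trimmed version of the event from the log as sample input:

```python
import json

def first_error_event(lines):
    """Return the data payload of the first minikube error event, or None."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            return event["data"]
    return None

# Trimmed sample of the RSRC_DOCKER_STORAGE event shown above.
sample = ('{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
          '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}')
err = first_error_event([sample])
```

Note that `exitcode` is a JSON string ("26"), matching the process exit status 26 the test asserts on.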
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-844108 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-844108 --output=json --layout=cluster: exit status 7 (283.93296ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-844108","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-844108","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 18:08:24.575193  602774 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-844108" does not appear in /home/jenkins/minikube-integration/20201-457936/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-844108 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-844108 --output=json --layout=cluster: exit status 7 (283.751938ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-844108","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-844108","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 18:08:24.858912  602837 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-844108" does not appear in /home/jenkins/minikube-integration/20201-457936/kubeconfig
	E0414 18:08:24.869188  602837 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/insufficient-storage-844108/events.json: no such file or directory

                                                
                                                
** /stderr **
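The `status --output=json --layout=cluster` payloads above encode health as HTTP-style status codes (507 InsufficientStorage, 500 Error, 405 Stopped). A sketch that flattens that structure into a name-to-status map, with field names taken directly from the output above:

```python
import json

def summarize_cluster_status(payload: str) -> dict:
    """Map cluster, component, and per-node component names to StatusName."""
    status = json.loads(payload)
    summary = {status["Name"]: status["StatusName"]}
    for name, comp in status.get("Components", {}).items():
        summary[name] = comp["StatusName"]
    for node in status.get("Nodes", []):
        # Suffix node keys so a node sharing the cluster name doesn't collide.
        summary[node["Name"] + "/node"] = node["StatusName"]
        for name, comp in node.get("Components", {}).items():
            summary[node["Name"] + "/" + name] = comp["StatusName"]
    return summary
```

Applied to the output above, this yields `InsufficientStorage` for the cluster and node, `Error` for kubeconfig (the endpoint never made it into the kubeconfig file), and `Stopped` for apiserver and kubelet.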
helpers_test.go:175: Cleaning up "insufficient-storage-844108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-844108
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-844108: (1.906727091s)
--- PASS: TestInsufficientStorage (10.91s)

                                                
                                    
TestRunningBinaryUpgrade (109.08s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3338463428 start -p running-upgrade-385465 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3338463428 start -p running-upgrade-385465 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.181708268s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-385465 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0414 18:15:13.187106  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-385465 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m10.475221732s)
helpers_test.go:175: Cleaning up "running-upgrade-385465" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-385465
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-385465: (2.790444116s)
--- PASS: TestRunningBinaryUpgrade (109.08s)

                                                
                                    
TestKubernetesUpgrade (389.92s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-448725 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-448725 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m13.850196417s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-448725
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-448725: (1.22427358s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-448725 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-448725 status --format={{.Host}}: exit status 7 (69.645042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-448725 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0414 18:11:37.224774  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-448725 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.284170969s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-448725 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-448725 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-448725 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (146.881409ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-448725] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20201
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20201-457936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20201-457936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-448725
	    minikube start -p kubernetes-upgrade-448725 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4487252 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-448725 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-448725 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-448725 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.984720132s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-448725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-448725
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-448725: (2.20256541s)
--- PASS: TestKubernetesUpgrade (389.92s)
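The K8S_DOWNGRADE_UNSUPPORTED refusal (exit 106) reduces to a semantic-version comparison between the requested and existing Kubernetes versions. A minimal Python sketch of that check (illustrative only; minikube's real logic lives in its Go version handling):

```python
def parse_version(v: str) -> tuple:
    """Parse 'v1.32.2' into the comparable tuple (1, 32, 2)."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def is_downgrade(existing: str, requested: str) -> bool:
    """True when the requested version is older than the running cluster's."""
    return parse_version(requested) < parse_version(existing)
```

Here `is_downgrade("v1.32.2", "v1.20.0")` is true, so minikube refuses and suggests deleting the profile or creating a second one instead, while the same-version restart at line 275 passes the check and succeeds.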

                                                
                                    
TestMissingContainerUpgrade (183.66s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2316497603 start -p missing-upgrade-738794 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2316497603 start -p missing-upgrade-738794 --memory=2200 --driver=docker  --container-runtime=crio: (1m24.888912233s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-738794
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-738794: (10.449701252s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-738794
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-738794 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-738794 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m25.34262349s)
helpers_test.go:175: Cleaning up "missing-upgrade-738794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-738794
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-738794: (2.092649713s)
--- PASS: TestMissingContainerUpgrade (183.66s)

                                                
                                    
TestPause/serial/Start (61s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-463464 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-463464 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m1.001606572s)
--- PASS: TestPause/serial/Start (61.00s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-984394 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-984394 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (114.640359ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-984394] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20201
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20201-457936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20201-457936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-984394 --driver=docker  --container-runtime=crio
E0414 18:08:34.152081  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-984394 --driver=docker  --container-runtime=crio: (38.791664555s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-984394 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.25s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (19.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-984394 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-984394 --no-kubernetes --driver=docker  --container-runtime=crio: (16.749674927s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-984394 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-984394 status -o json: exit status 2 (293.435072ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-984394","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-984394
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-984394: (1.969438786s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.01s)
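Note that `minikube status -o json` exits 2 above even though it reported status successfully: a non-zero exit signals that some component is stopped, while the JSON carries the detail. A minimal sketch of consuming that JSON rather than treating the exit code as a failure (the sample string is copied from the stdout above; the `paused_k8s` check is illustrative, not part of the test suite):

```python
import json

# Output captured from `minikube status -o json` in the log above. The
# command exits 2 when any component is stopped, so inspect the fields
# instead of treating a non-zero exit as a hard error.
raw = ('{"Name":"NoKubernetes-984394","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')
status = json.loads(raw)

# Node container up, Kubernetes components down: exactly the state the
# --no-kubernetes restart is expected to produce.
paused_k8s = status["Host"] == "Running" and status["Kubelet"] == "Stopped"
print(paused_k8s)  # True
```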

                                                
                                    
TestNoKubernetes/serial/Start (6.38s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-984394 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-984394 --no-kubernetes --driver=docker  --container-runtime=crio: (6.384093178s)
--- PASS: TestNoKubernetes/serial/Start (6.38s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (43.86s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-463464 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-463464 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.836686472s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (43.86s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-984394 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-984394 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.780076ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
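The "Process exited with status 3" in the stderr above is the remote `systemctl is-active` result surfaced through the ssh wrapper (which itself exits 1): systemd's convention is exit 0 for an active unit and non-zero, commonly 3 ("inactive"), otherwise. A small sketch of the check the test relies on (the helper name is hypothetical):

```python
# systemd convention for `systemctl is-active`: exit status 0 means the unit
# is active; any non-zero status (3 is the usual "inactive" code) means it
# is not running.
SYSTEMD_ACTIVE = 0

def kubelet_running(remote_exit_status: int) -> bool:
    """Interpret the exit status of `systemctl is-active --quiet kubelet`."""
    return remote_exit_status == SYSTEMD_ACTIVE

print(kubelet_running(3))  # False: kubelet inactive, as this test expects
```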

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.98s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.98s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-984394
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-984394: (1.207241778s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.6s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-984394 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-984394 --driver=docker  --container-runtime=crio: (6.603625471s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.60s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-984394 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-984394 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.768701ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestPause/serial/Pause (1.08s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-463464 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-463464 --alsologtostderr -v=5: (1.07831769s)
--- PASS: TestPause/serial/Pause (1.08s)

                                                
                                    
TestPause/serial/VerifyStatus (0.41s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-463464 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-463464 --output=json --layout=cluster: exit status 2 (409.778647ms)

                                                
                                                
-- stdout --
	{"Name":"pause-463464","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-463464","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
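The cluster-layout status above reuses HTTP-style codes: 200 for OK, 405 for Stopped, and 418 for Paused, which is why a paused cluster makes the command exit 2. A sketch of pulling per-component state out of that JSON (the sample below is a trimmed, hand-reduced version of the stdout above, kept only to the fields the sketch reads):

```python
import json

# Trimmed sample of the `minikube status --output=json --layout=cluster`
# stdout above. minikube encodes state with HTTP-style codes:
# 200 = OK, 405 = Stopped, 418 = Paused.
raw = ('{"Name":"pause-463464","StatusCode":418,"StatusName":"Paused",'
       '"Nodes":[{"Name":"pause-463464","Components":{'
       '"apiserver":{"StatusCode":418,"StatusName":"Paused"},'
       '"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}')
cluster = json.loads(raw)

# Map each component of the first node to its human-readable state.
states = {name: comp["StatusName"]
          for name, comp in cluster["Nodes"][0]["Components"].items()}
print(states)  # {'apiserver': 'Paused', 'kubelet': 'Stopped'}
```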

                                                
                                    
TestPause/serial/Unpause (0.84s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-463464 --alsologtostderr -v=5
E0414 18:10:13.188052  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestPause/serial/Unpause (0.84s)

                                                
                                    
TestPause/serial/PauseAgain (0.95s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-463464 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.95s)

                                                
                                    
TestPause/serial/DeletePaused (2.97s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-463464 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-463464 --alsologtostderr -v=5: (2.965604826s)
--- PASS: TestPause/serial/DeletePaused (2.97s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.24s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-463464
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-463464: exit status 1 (19.103665ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-463464: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.24s)
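The non-zero exit from `docker volume inspect` above is the expected outcome here: for a missing volume, Docker prints an empty JSON array (`[]`) on stdout, an error on stderr, and exits 1. A sketch of the deletion check this verification step amounts to (the helper name is hypothetical):

```python
import json

# `docker volume inspect NAME` prints a JSON array of matching volumes on
# stdout and exits 1 when none exist. An empty array plus a non-zero exit
# is what confirms the volume was deleted.
def volume_deleted(exit_status: int, stdout: str) -> bool:
    return exit_status != 0 and json.loads(stdout) == []

print(volume_deleted(1, "[]"))  # True, matching the output captured above
```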

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.00s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (67.53s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2569722288 start -p stopped-upgrade-821934 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2569722288 start -p stopped-upgrade-821934 --memory=2200 --vm-driver=docker  --container-runtime=crio: (32.510705954s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2569722288 -p stopped-upgrade-821934 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2569722288 -p stopped-upgrade-821934 stop: (2.6308008s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-821934 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0414 18:13:34.152005  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-821934 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.388373333s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (67.53s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-821934
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-821934: (1.035703547s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                    
TestNetworkPlugins/group/false (3.78s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-950386 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-950386 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (192.395418ms)

                                                
                                                
-- stdout --
	* [false-950386] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20201
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20201-457936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20201-457936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 18:16:51.903598  643902 out.go:345] Setting OutFile to fd 1 ...
	I0414 18:16:51.903837  643902 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 18:16:51.903872  643902 out.go:358] Setting ErrFile to fd 2...
	I0414 18:16:51.903910  643902 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 18:16:51.904203  643902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20201-457936/.minikube/bin
	I0414 18:16:51.904727  643902 out.go:352] Setting JSON to false
	I0414 18:16:51.905739  643902 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10758,"bootTime":1744643854,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0414 18:16:51.905833  643902 start.go:139] virtualization:  
	I0414 18:16:51.909416  643902 out.go:177] * [false-950386] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0414 18:16:51.913329  643902 out.go:177]   - MINIKUBE_LOCATION=20201
	I0414 18:16:51.913550  643902 notify.go:220] Checking for updates...
	I0414 18:16:51.919258  643902 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 18:16:51.922364  643902 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20201-457936/kubeconfig
	I0414 18:16:51.925360  643902 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20201-457936/.minikube
	I0414 18:16:51.928412  643902 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0414 18:16:51.931426  643902 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 18:16:51.934952  643902 config.go:182] Loaded profile config "cert-expiration-818262": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 18:16:51.935071  643902 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 18:16:51.959234  643902 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0414 18:16:51.959366  643902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 18:16:52.022660  643902 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-14 18:16:52.013186266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0414 18:16:52.022769  643902 docker.go:318] overlay module found
	I0414 18:16:52.025916  643902 out.go:177] * Using the docker driver based on user configuration
	I0414 18:16:52.028862  643902 start.go:297] selected driver: docker
	I0414 18:16:52.028884  643902 start.go:901] validating driver "docker" against <nil>
	I0414 18:16:52.028911  643902 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 18:16:52.032398  643902 out.go:201] 
	W0414 18:16:52.035432  643902 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0414 18:16:52.038303  643902 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-950386 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-950386

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-950386

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-950386

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-950386

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-950386

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-950386

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-950386

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-950386

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-950386

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-950386

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-950386

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-950386" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-950386" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20201-457936/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 18:16:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-818262
contexts:
- context:
    cluster: cert-expiration-818262
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 18:16:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-818262
  name: cert-expiration-818262
current-context: cert-expiration-818262
kind: Config
preferences: {}
users:
- name: cert-expiration-818262
  user:
    client-certificate: /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/cert-expiration-818262/client.crt
    client-key: /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/cert-expiration-818262/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-950386

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-950386"

                                                
                                                
----------------------- debugLogs end: false-950386 [took: 3.425024479s] --------------------------------
helpers_test.go:175: Cleaning up "false-950386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-950386
--- PASS: TestNetworkPlugins/group/false (3.78s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (151.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-823963 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-823963 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m31.607542107s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (151.61s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (66.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-208687 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 18:20:13.187548  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-208687 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (1m6.023125859s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-208687 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b9baebc8-c87c-49a9-a640-eea127707589] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b9baebc8-c87c-49a9-a640-eea127707589] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00388532s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-208687 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-208687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-208687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.301193153s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-208687 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-208687 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-208687 --alsologtostderr -v=3: (12.010978639s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-823963 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3996fba8-fc42-4c96-97c4-a10417b1e9d9] Pending
helpers_test.go:344: "busybox" [3996fba8-fc42-4c96-97c4-a10417b1e9d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3996fba8-fc42-4c96-97c4-a10417b1e9d9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003248744s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-823963 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.53s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-208687 -n no-preload-208687
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-208687 -n no-preload-208687: exit status 7 (77.638267ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-208687 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (282.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-208687 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-208687 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (4m42.401117088s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-208687 -n no-preload-208687
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (282.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-823963 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-823963 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.1785269s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-823963 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-823963 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-823963 --alsologtostderr -v=3: (12.153614669s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-823963 -n old-k8s-version-823963
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-823963 -n old-k8s-version-823963: exit status 7 (76.805016ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-823963 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (146.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-823963 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0414 18:23:34.152031  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-823963 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m25.649659136s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-823963 -n old-k8s-version-823963
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (146.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-d5nml" [c7e537c1-df67-4086-bd78-d39b8c3f276b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003392196s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-d5nml" [c7e537c1-df67-4086-bd78-d39b8c3f276b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006142177s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-823963 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-823963 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-823963 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-823963 -n old-k8s-version-823963
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-823963 -n old-k8s-version-823963: exit status 2 (318.382848ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-823963 -n old-k8s-version-823963
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-823963 -n old-k8s-version-823963: exit status 2 (311.17612ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-823963 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-823963 -n old-k8s-version-823963
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-823963 -n old-k8s-version-823963
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (53.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-250094 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-250094 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (53.406273163s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-250094 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [de610fb0-cc1e-4118-9dae-8ee1a6fbed41] Pending
E0414 18:25:13.187958  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [de610fb0-cc1e-4118-9dae-8ee1a6fbed41] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [de610fb0-cc1e-4118-9dae-8ee1a6fbed41] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004006973s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-250094 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-250094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-250094 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-250094 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-250094 --alsologtostderr -v=3: (11.959080545s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-250094 -n embed-certs-250094
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-250094 -n embed-certs-250094: exit status 7 (77.409037ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-250094 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (267.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-250094 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-250094 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (4m26.733388757s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-250094 -n embed-certs-250094
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-flx69" [12de118e-cd64-4882-9d83-708157669f70] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003652069s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-flx69" [12de118e-cd64-4882-9d83-708157669f70] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00365726s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-208687 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-208687 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.06s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-208687 --alsologtostderr -v=1
E0414 18:26:12.897010  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:26:12.903345  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:26:12.914669  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:26:12.935950  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:26:12.977290  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:26:13.059379  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:26:13.221163  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-208687 -n no-preload-208687
E0414 18:26:13.542981  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-208687 -n no-preload-208687: exit status 2 (340.046543ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-208687 -n no-preload-208687
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-208687 -n no-preload-208687: exit status 2 (329.210801ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-208687 --alsologtostderr -v=1
E0414 18:26:14.184747  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-208687 -n no-preload-208687
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-208687 -n no-preload-208687
E0414 18:26:15.466727  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.06s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-730372 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 18:26:23.150633  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:26:33.392184  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:26:53.874277  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-730372 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (51.371730888s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.37s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-730372 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f158fbbd-6b1c-4d8b-b10d-4dfa2c2aeaba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f158fbbd-6b1c-4d8b-b10d-4dfa2c2aeaba] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.002704571s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-730372 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-730372 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-730372 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-730372 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-730372 --alsologtostderr -v=3: (11.964343184s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-730372 -n default-k8s-diff-port-730372
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-730372 -n default-k8s-diff-port-730372: exit status 7 (73.278327ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-730372 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (278.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-730372 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 18:27:34.836001  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:28:17.226586  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:28:34.151961  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:28:56.757993  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-730372 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (4m38.242308584s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-730372 -n default-k8s-diff-port-730372
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (278.65s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-p8sfs" [fe9c2dde-ab04-4aca-8ff3-23afcbe964b1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003868864s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-p8sfs" [fe9c2dde-ab04-4aca-8ff3-23afcbe964b1] Running
E0414 18:30:13.187392  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003677218s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-250094 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-250094 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-250094 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-250094 -n embed-certs-250094
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-250094 -n embed-certs-250094: exit status 2 (336.334669ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-250094 -n embed-certs-250094
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-250094 -n embed-certs-250094: exit status 2 (323.27335ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-250094 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-250094 -n embed-certs-250094
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-250094 -n embed-certs-250094
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.11s)

TestStartStop/group/newest-cni/serial/FirstStart (37.16s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-807624 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 18:30:55.404761  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:30:55.411191  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:30:55.422592  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:30:55.443993  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:30:55.485327  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:30:55.566760  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:30:55.728379  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:30:56.049837  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:30:56.694264  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:30:57.975689  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-807624 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (37.161750448s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.16s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-807624 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-807624 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.133647752s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-807624 --alsologtostderr -v=3
E0414 18:31:00.537996  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-807624 --alsologtostderr -v=3: (1.25570738s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-807624 -n newest-cni-807624
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-807624 -n newest-cni-807624: exit status 7 (80.644988ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-807624 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (16.68s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-807624 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 18:31:05.660330  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:31:12.897248  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:31:15.901665  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-807624 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (16.339757427s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-807624 -n newest-cni-807624
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.68s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-807624 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (3.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-807624 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-807624 -n newest-cni-807624
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-807624 -n newest-cni-807624: exit status 2 (331.142357ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-807624 -n newest-cni-807624
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-807624 -n newest-cni-807624: exit status 2 (317.93067ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-807624 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-807624 -n newest-cni-807624
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-807624 -n newest-cni-807624
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.12s)

TestNetworkPlugins/group/auto/Start (50.95s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-950386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0414 18:31:36.383343  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:31:40.599645  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/old-k8s-version-823963/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-950386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (50.945946431s)
--- PASS: TestNetworkPlugins/group/auto/Start (50.95s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5vzbp" [34924da6-c9fc-4081-ac93-3e5fee6ecfc6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004472175s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-950386 "pgrep -a kubelet"
I0414 18:32:14.752073  463312 config.go:182] Loaded profile config "auto-950386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-950386 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vbhnk" [927d9769-455c-465c-925f-328accd3316d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0414 18:32:17.345165  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-vbhnk" [927d9769-455c-465c-925f-328accd3316d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003774874s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.29s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5vzbp" [34924da6-c9fc-4081-ac93-3e5fee6ecfc6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003175341s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-730372 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-730372 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-730372 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-730372 -n default-k8s-diff-port-730372
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-730372 -n default-k8s-diff-port-730372: exit status 2 (333.355929ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-730372 -n default-k8s-diff-port-730372
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-730372 -n default-k8s-diff-port-730372: exit status 2 (321.092867ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-730372 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-730372 -n default-k8s-diff-port-730372
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-730372 -n default-k8s-diff-port-730372
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.32s)
E0414 18:37:20.133714  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/auto-950386/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:20.483627  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/default-k8s-diff-port-730372/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:25.255681  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/auto-950386/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:30.725412  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/default-k8s-diff-port-730372/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/auto/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-950386 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-950386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-950386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)

TestNetworkPlugins/group/kindnet/Start (60.52s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-950386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-950386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m0.519483275s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (60.52s)

TestNetworkPlugins/group/calico/Start (71.32s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-950386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-950386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m11.324182107s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.32s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7jjgh" [f00fc601-4fb5-425f-8181-79fd0b8846fa] Running
E0414 18:33:34.152420  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/addons-225375/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005250395s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-950386 "pgrep -a kubelet"
I0414 18:33:37.752661  463312 config.go:182] Loaded profile config "kindnet-950386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-950386 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-mqskc" [f101ccca-f247-481f-a5bd-12cf36305a04] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0414 18:33:39.266604  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-mqskc" [f101ccca-f247-481f-a5bd-12cf36305a04] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.00282222s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-950386 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-950386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-950386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-df9dk" [27b672a6-d7af-4521-88ee-3b8f5288528a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004113392s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-950386 "pgrep -a kubelet"
I0414 18:34:08.442706  463312 config.go:182] Loaded profile config "calico-950386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

TestNetworkPlugins/group/calico/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-950386 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-z65s4" [4963d3a5-cc9d-4aa4-8ba4-9e59b57ecd4d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-z65s4" [4963d3a5-cc9d-4aa4-8ba4-9e59b57ecd4d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006520998s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.39s)

TestNetworkPlugins/group/custom-flannel/Start (58.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-950386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-950386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (58.198043625s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.20s)

TestNetworkPlugins/group/calico/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-950386 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.39s)

TestNetworkPlugins/group/calico/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-950386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.31s)

TestNetworkPlugins/group/calico/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-950386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

TestNetworkPlugins/group/enable-default-cni/Start (49.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-950386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0414 18:34:56.255908  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-950386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (49.416309256s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (49.42s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-950386 "pgrep -a kubelet"
I0414 18:35:11.626305  463312 config.go:182] Loaded profile config "custom-flannel-950386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-950386 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-fb8d2" [f44ce6ca-a347-40af-82a8-8a1e735f929d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0414 18:35:13.187554  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/functional-666858/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-fb8d2" [f44ce6ca-a347-40af-82a8-8a1e735f929d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004313766s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.40s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-950386 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-950386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-950386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-950386 "pgrep -a kubelet"
I0414 18:35:36.749847  463312 config.go:182] Loaded profile config "enable-default-cni-950386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-950386 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5dz4s" [7aa96239-b575-42e9-b389-ba023dcb79e8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-5dz4s" [7aa96239-b575-42e9-b389-ba023dcb79e8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.003420627s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.39s)

TestNetworkPlugins/group/flannel/Start (49.1s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-950386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-950386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (49.102239771s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.10s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-950386 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-950386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.30s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-950386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (48.98s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-950386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0414 18:36:23.108229  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/no-preload-208687/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-950386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (48.975556956s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.98s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bk692" [389639fb-e950-43e7-a237-d8c34e319229] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003393448s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-950386 "pgrep -a kubelet"
I0414 18:36:42.965094  463312 config.go:182] Loaded profile config "flannel-950386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/flannel/NetCatPod (12.52s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-950386 replace --force -f testdata/netcat-deployment.yaml
I0414 18:36:43.461137  463312 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-c6d5r" [953b4189-57fe-476f-a12d-897d3014503b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-c6d5r" [953b4189-57fe-476f-a12d-897d3014503b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004223854s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.52s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-950386 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-950386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-950386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)
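The Localhost and HairPin checks above both reduce to a TCP connect-scan (`nc -w 5 -i 5 -z <host> 8080`): the probe passes iff something accepts on the port, with the hairpin variant exercising the path from a pod back to itself through its own Service name. A minimal sketch of the same connect-probe in Python, run against a throwaway loopback listener rather than the in-cluster `netcat` Service (the listener and port choice here are illustrative, not part of the test):

```python
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """TCP connect probe: the check `nc -w 5 -z <host> <port>` performs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand up a loopback listener to probe (the real test targets the
# netcat Service on port 8080; port 0 here picks any free port).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
print(port_open("127.0.0.1", port))   # listening port: True
srv.close()
print(port_open("127.0.0.1", port))   # closed port: connection refused, False
```

The same boolean outcome is what the test asserts via `nc`'s exit status.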

TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-950386 "pgrep -a kubelet"
I0414 18:37:04.592789  463312 config.go:182] Loaded profile config "bridge-950386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

TestNetworkPlugins/group/bridge/NetCatPod (12.39s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-950386 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-29snf" [f1de3294-a5e0-4b1f-b6c5-bbb467ad568f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0414 18:37:10.229963  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/default-k8s-diff-port-730372/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:10.236240  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/default-k8s-diff-port-730372/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:10.247571  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/default-k8s-diff-port-730372/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:10.268924  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/default-k8s-diff-port-730372/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:10.311185  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/default-k8s-diff-port-730372/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:10.392478  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/default-k8s-diff-port-730372/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:10.554178  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/default-k8s-diff-port-730372/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:10.876063  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/default-k8s-diff-port-730372/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:11.518204  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/default-k8s-diff-port-730372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-29snf" [f1de3294-a5e0-4b1f-b6c5-bbb467ad568f] Running
E0414 18:37:12.800291  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/default-k8s-diff-port-730372/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:15.001673  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/auto-950386/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:15.008502  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/auto-950386/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:15.019913  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/auto-950386/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:15.042114  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/auto-950386/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:15.083646  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/auto-950386/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:15.165024  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/auto-950386/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:15.326488  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/auto-950386/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:15.361935  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/default-k8s-diff-port-730372/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:37:15.648013  463312 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/auto-950386/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003864126s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.39s)

TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-950386 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)
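The DNS check above only asserts that `nslookup kubernetes.default` inside the netcat pod returns an answer from the cluster resolver. A hedged sketch of the equivalent resolver call, probing names that need no cluster (`localhost` always resolves; the reserved `.invalid` TLD never does):

```python
import socket

def resolves(name: str) -> bool:
    """True iff <name> yields at least one address record,
    which is what a successful `nslookup <name>` demonstrates."""
    try:
        return len(socket.getaddrinfo(name, None)) > 0
    except socket.gaierror:
        return False

print(resolves("localhost"))             # True: loopback always resolves
print(resolves("no-such-host.invalid"))  # False: .invalid is reserved (RFC 2606)
```

In-cluster, the same lookup goes through CoreDNS via the pod's `/etc/resolv.conf` search path, which is the path the plugin test is really exercising.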

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-950386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-950386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (32/331)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestDownloadOnlyKic (0.58s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-817347 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-817347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-817347
--- SKIP: TestDownloadOnlyKic (0.58s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-225375 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1804: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-999952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-999952
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.66s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-950386 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-950386

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-950386

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-950386

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-950386

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-950386

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-950386

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-950386

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-950386

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-950386

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-950386

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

>>> host: /etc/hosts:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

>>> host: /etc/resolv.conf:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-950386

>>> host: crictl pods:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

>>> host: crictl containers:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

>>> k8s: describe netcat deployment:
error: context "kubenet-950386" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-950386" does not exist

>>> k8s: netcat logs:
error: context "kubenet-950386" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-950386" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-950386" does not exist

>>> k8s: coredns logs:
error: context "kubenet-950386" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-950386" does not exist

>>> k8s: api server logs:
error: context "kubenet-950386" does not exist

>>> host: /etc/cni:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

>>> host: ip a s:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-950386" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20201-457936/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 18:16:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-818262
contexts:
- context:
    cluster: cert-expiration-818262
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 18:16:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-818262
  name: cert-expiration-818262
current-context: cert-expiration-818262
kind: Config
preferences: {}
users:
- name: cert-expiration-818262
  user:
    client-certificate: /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/cert-expiration-818262/client.crt
    client-key: /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/cert-expiration-818262/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-950386

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-950386"

                                                
                                                
----------------------- debugLogs end: kubenet-950386 [took: 3.500735703s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-950386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-950386
--- SKIP: TestNetworkPlugins/group/kubenet (3.66s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-950386 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-950386" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20201-457936/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 18:16:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-818262
contexts:
- context:
    cluster: cert-expiration-818262
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 18:16:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-818262
  name: cert-expiration-818262
current-context: cert-expiration-818262
kind: Config
preferences: {}
users:
- name: cert-expiration-818262
  user:
    client-certificate: /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/cert-expiration-818262/client.crt
    client-key: /home/jenkins/minikube-integration/20201-457936/.minikube/profiles/cert-expiration-818262/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-950386

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: docker system info:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: cri-docker daemon status:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: cri-docker daemon config:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: cri-dockerd version:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: containerd daemon status:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: containerd daemon config:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: containerd config dump:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: crio daemon status:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: crio daemon config:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: /etc/crio:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

>>> host: crio config:
* Profile "cilium-950386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950386"

----------------------- debugLogs end: cilium-950386 [took: 3.856031058s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-950386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-950386
--- SKIP: TestNetworkPlugins/group/cilium (4.02s)
