Test Report: Docker_Linux_containerd_arm64 19598

cb70ad94d69a229bf8d3511a5a00af396fa2386e:2024-09-10:36157

Failed tests (1/328)

Order  Failed test                Duration
29     TestAddons/serial/Volcano  200.25s
TestAddons/serial/Volcano (200.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 47.00594ms
addons_test.go:897: volcano-scheduler stabilized in 47.118784ms
addons_test.go:905: volcano-admission stabilized in 47.170427ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-7x7qn" [816fa8f8-1192-45b5-9bed-7252105431b6] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003498141s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-vnlcw" [263b789a-9c92-4d3f-a989-44233f263bc6] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003684328s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-kmkm9" [3bd9f8b8-0fca-4ca3-805c-bef2a8ff9000] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00338406s
addons_test.go:932: (dbg) Run:  kubectl --context addons-827965 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-827965 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-827965 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [518e9363-0a78-41b9-a2da-1a6878f3064c] Pending
helpers_test.go:344: "test-job-nginx-0" [518e9363-0a78-41b9-a2da-1a6878f3064c] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-827965 -n addons-827965
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-10 18:28:57.820944653 +0000 UTC m=+437.483810952
addons_test.go:964: (dbg) Run:  kubectl --context addons-827965 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-827965 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-67e30741-7f56-4366-8e9b-afe851782951
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8d4x (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-x8d4x:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m58s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-827965 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-827965 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
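The scheduling failure above is simple arithmetic: the pod requests a full CPU (`Requests: cpu: 1` in the describe output), but the single minikube node is capped at 2 CPUs, and once the CPU requests of the many enabled addons are counted, less than one CPU remains allocatable. A minimal sketch of the scheduler's fit check; the addon total of 1.35 CPUs is a hypothetical figure for illustration, not a value observed in this run:

```python
# Sketch of the scheduler's CPU fit check for test-job-nginx-0.
# node_cpus comes from the container cap (NanoCpus = 2000000000, i.e. 2 CPUs);
# addon_requests is a HYPOTHETICAL sum of CPU requests from already-running
# addon pods, not a value taken from this report.
node_cpus = 2.0
addon_requests = 1.35   # assumption for illustration
pod_request = 1.0       # from the pod spec: Requests: cpu: 1

fits = node_cpus - addon_requests >= pod_request
print(fits)  # False -> the scheduler reports "1 Insufficient cpu."
```

With any assumed addon total above 1.0 CPU, the 1-CPU request cannot fit on a 2-CPU node, which matches the `FailedScheduling` event above.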
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-827965
helpers_test.go:235: (dbg) docker inspect addons-827965:

-- stdout --
	[
	    {
	        "Id": "bf446859967f3bdfb7006b4de6b6d33899631b0c0c446cb6b1d3bf743086d547",
	        "Created": "2024-09-10T18:22:25.143614829Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299907,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-10T18:22:25.290404508Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a4261f15fdf40db09c0b78a1feabe6bd85433327166d5c98909d23a556dff45f",
	        "ResolvConfPath": "/var/lib/docker/containers/bf446859967f3bdfb7006b4de6b6d33899631b0c0c446cb6b1d3bf743086d547/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf446859967f3bdfb7006b4de6b6d33899631b0c0c446cb6b1d3bf743086d547/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf446859967f3bdfb7006b4de6b6d33899631b0c0c446cb6b1d3bf743086d547/hosts",
	        "LogPath": "/var/lib/docker/containers/bf446859967f3bdfb7006b4de6b6d33899631b0c0c446cb6b1d3bf743086d547/bf446859967f3bdfb7006b4de6b6d33899631b0c0c446cb6b1d3bf743086d547-json.log",
	        "Name": "/addons-827965",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-827965:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-827965",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/136d97dff90f7e72975c374bafec6856bc4d3c77c9180362c545946716b3f254-init/diff:/var/lib/docker/overlay2/3154bb76135996f30b899383b0ddcba9ada28aba984fcbfc04c25722b32d40d6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/136d97dff90f7e72975c374bafec6856bc4d3c77c9180362c545946716b3f254/merged",
	                "UpperDir": "/var/lib/docker/overlay2/136d97dff90f7e72975c374bafec6856bc4d3c77c9180362c545946716b3f254/diff",
	                "WorkDir": "/var/lib/docker/overlay2/136d97dff90f7e72975c374bafec6856bc4d3c77c9180362c545946716b3f254/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-827965",
	                "Source": "/var/lib/docker/volumes/addons-827965/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-827965",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-827965",
	                "name.minikube.sigs.k8s.io": "addons-827965",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7494f1acf54448e1778f99c7d4dba157d491c7b7b6d3797b964ee2645690dac6",
	            "SandboxKey": "/var/run/docker/netns/7494f1acf544",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-827965": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a1701273ceeaf465994dac02434fbb3e2e0f3e3793c5eaddb136dae8f2cbd8e6",
	                    "EndpointID": "4dd375e9535fe32efd4aced7caab2288de86d68393dc2736518f9d64403d2bb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-827965",
	                        "bf446859967f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
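For reference, the HostConfig values in the inspect output above pin down the node container's resources: `NanoCpus` of 2000000000 is 2 CPUs, and `Memory` of 4194304000 bytes is exactly 4000 MiB, matching the `--memory=4000` flag visible in the Audit table further down. A quick sanity check of the unit conversions:

```python
# Convert the docker HostConfig resource caps into familiar units.
nano_cpus = 2_000_000_000      # HostConfig.NanoCpus (1 CPU = 1e9 nano-CPUs)
memory_bytes = 4_194_304_000   # HostConfig.Memory
swap_bytes = 8_388_608_000     # HostConfig.MemorySwap

print(nano_cpus / 1e9)         # 2.0 CPUs
print(memory_bytes // 2**20)   # 4000 MiB
print(swap_bytes // 2**20)     # 8000 MiB
```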
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-827965 -n addons-827965
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-827965 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-827965 logs -n 25: (1.76935201s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-154248   | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC |                     |
	|         | -p download-only-154248              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC | 10 Sep 24 18:21 UTC |
	| delete  | -p download-only-154248              | download-only-154248   | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC | 10 Sep 24 18:21 UTC |
	| start   | -o=json --download-only              | download-only-256966   | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC |                     |
	|         | -p download-only-256966              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC | 10 Sep 24 18:21 UTC |
	| delete  | -p download-only-256966              | download-only-256966   | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC | 10 Sep 24 18:21 UTC |
	| delete  | -p download-only-154248              | download-only-154248   | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC | 10 Sep 24 18:21 UTC |
	| delete  | -p download-only-256966              | download-only-256966   | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC | 10 Sep 24 18:21 UTC |
	| start   | --download-only -p                   | download-docker-096745 | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC |                     |
	|         | download-docker-096745               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-096745            | download-docker-096745 | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC | 10 Sep 24 18:21 UTC |
	| start   | --download-only -p                   | binary-mirror-811787   | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC |                     |
	|         | binary-mirror-811787                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43871               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-811787              | binary-mirror-811787   | jenkins | v1.34.0 | 10 Sep 24 18:22 UTC | 10 Sep 24 18:22 UTC |
	| addons  | enable dashboard -p                  | addons-827965          | jenkins | v1.34.0 | 10 Sep 24 18:22 UTC |                     |
	|         | addons-827965                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-827965          | jenkins | v1.34.0 | 10 Sep 24 18:22 UTC |                     |
	|         | addons-827965                        |                        |         |         |                     |                     |
	| start   | -p addons-827965 --wait=true         | addons-827965          | jenkins | v1.34.0 | 10 Sep 24 18:22 UTC | 10 Sep 24 18:25 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:22:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:22:00.670298  299416 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:22:00.670536  299416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:22:00.670568  299416 out.go:358] Setting ErrFile to fd 2...
	I0910 18:22:00.670589  299416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:22:00.670869  299416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
	I0910 18:22:00.671372  299416 out.go:352] Setting JSON to false
	I0910 18:22:00.672318  299416 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7471,"bootTime":1725985050,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0910 18:22:00.672415  299416 start.go:139] virtualization:  
	I0910 18:22:00.675247  299416 out.go:177] * [addons-827965] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0910 18:22:00.678117  299416 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:22:00.678180  299416 notify.go:220] Checking for updates...
	I0910 18:22:00.682482  299416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:22:00.684876  299416 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-293262/kubeconfig
	I0910 18:22:00.687124  299416 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-293262/.minikube
	I0910 18:22:00.689206  299416 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0910 18:22:00.691933  299416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:22:00.694225  299416 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:22:00.724898  299416 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0910 18:22:00.725002  299416 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 18:22:00.781435  299416 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-10 18:22:00.771840441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 18:22:00.781547  299416 docker.go:318] overlay module found
	I0910 18:22:00.784586  299416 out.go:177] * Using the docker driver based on user configuration
	I0910 18:22:00.786900  299416 start.go:297] selected driver: docker
	I0910 18:22:00.786921  299416 start.go:901] validating driver "docker" against <nil>
	I0910 18:22:00.786937  299416 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:22:00.787581  299416 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 18:22:00.841907  299416 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-10 18:22:00.832157114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 18:22:00.842071  299416 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 18:22:00.842304  299416 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:22:00.844559  299416 out.go:177] * Using Docker driver with root privileges
	I0910 18:22:00.846502  299416 cni.go:84] Creating CNI manager for ""
	I0910 18:22:00.846529  299416 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0910 18:22:00.846542  299416 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0910 18:22:00.846641  299416 start.go:340] cluster config:
	{Name:addons-827965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-827965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:22:00.850020  299416 out.go:177] * Starting "addons-827965" primary control-plane node in "addons-827965" cluster
	I0910 18:22:00.851944  299416 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0910 18:22:00.853711  299416 out.go:177] * Pulling base image v0.0.45-1725963390-19606 ...
	I0910 18:22:00.855879  299416 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0910 18:22:00.855915  299416 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local docker daemon
	I0910 18:22:00.855971  299416 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-293262/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0910 18:22:00.855980  299416 cache.go:56] Caching tarball of preloaded images
	I0910 18:22:00.856067  299416 preload.go:172] Found /home/jenkins/minikube-integration/19598-293262/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 18:22:00.856077  299416 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0910 18:22:00.856433  299416 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/config.json ...
	I0910 18:22:00.856464  299416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/config.json: {Name:mk7ea9a9f70c5ead3a7ad7cd1e33a021f1259c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:22:00.871471  299416 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 to local cache
	I0910 18:22:00.871591  299416 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local cache directory
	I0910 18:22:00.871611  299416 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local cache directory, skipping pull
	I0910 18:22:00.871616  299416 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 exists in cache, skipping pull
	I0910 18:22:00.871625  299416 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 as a tarball
	I0910 18:22:00.871630  299416 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 from local cache
	I0910 18:22:18.353402  299416 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 from cached tarball
	I0910 18:22:18.353444  299416 cache.go:194] Successfully downloaded all kic artifacts
	I0910 18:22:18.353486  299416 start.go:360] acquireMachinesLock for addons-827965: {Name:mk046a56ae66d1ad75141faf6c2114110c689ecc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:22:18.354087  299416 start.go:364] duration metric: took 574.595µs to acquireMachinesLock for "addons-827965"
	I0910 18:22:18.354124  299416 start.go:93] Provisioning new machine with config: &{Name:addons-827965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-827965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0910 18:22:18.354212  299416 start.go:125] createHost starting for "" (driver="docker")
	I0910 18:22:18.356450  299416 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0910 18:22:18.356703  299416 start.go:159] libmachine.API.Create for "addons-827965" (driver="docker")
	I0910 18:22:18.356741  299416 client.go:168] LocalClient.Create starting
	I0910 18:22:18.356889  299416 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19598-293262/.minikube/certs/ca.pem
	I0910 18:22:18.674631  299416 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19598-293262/.minikube/certs/cert.pem
	I0910 18:22:19.101615  299416 cli_runner.go:164] Run: docker network inspect addons-827965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0910 18:22:19.116611  299416 cli_runner.go:211] docker network inspect addons-827965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0910 18:22:19.116708  299416 network_create.go:284] running [docker network inspect addons-827965] to gather additional debugging logs...
	I0910 18:22:19.116730  299416 cli_runner.go:164] Run: docker network inspect addons-827965
	W0910 18:22:19.131802  299416 cli_runner.go:211] docker network inspect addons-827965 returned with exit code 1
	I0910 18:22:19.131836  299416 network_create.go:287] error running [docker network inspect addons-827965]: docker network inspect addons-827965: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-827965 not found
	I0910 18:22:19.131849  299416 network_create.go:289] output of [docker network inspect addons-827965]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-827965 not found
	
	** /stderr **
	I0910 18:22:19.131953  299416 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0910 18:22:19.148198  299416 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400178dd00}
	I0910 18:22:19.148237  299416 network_create.go:124] attempt to create docker network addons-827965 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0910 18:22:19.148299  299416 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-827965 addons-827965
	I0910 18:22:19.220475  299416 network_create.go:108] docker network addons-827965 192.168.49.0/24 created
	I0910 18:22:19.220505  299416 kic.go:121] calculated static IP "192.168.49.2" for the "addons-827965" container
	I0910 18:22:19.220592  299416 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0910 18:22:19.235784  299416 cli_runner.go:164] Run: docker volume create addons-827965 --label name.minikube.sigs.k8s.io=addons-827965 --label created_by.minikube.sigs.k8s.io=true
	I0910 18:22:19.252975  299416 oci.go:103] Successfully created a docker volume addons-827965
	I0910 18:22:19.253073  299416 cli_runner.go:164] Run: docker run --rm --name addons-827965-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-827965 --entrypoint /usr/bin/test -v addons-827965:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 -d /var/lib
	I0910 18:22:20.846203  299416 cli_runner.go:217] Completed: docker run --rm --name addons-827965-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-827965 --entrypoint /usr/bin/test -v addons-827965:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 -d /var/lib: (1.593083812s)
	I0910 18:22:20.846233  299416 oci.go:107] Successfully prepared a docker volume addons-827965
	I0910 18:22:20.846260  299416 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0910 18:22:20.846280  299416 kic.go:194] Starting extracting preloaded images to volume ...
	I0910 18:22:20.846365  299416 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19598-293262/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-827965:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 -I lz4 -xf /preloaded.tar -C /extractDir
	I0910 18:22:25.069400  299416 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19598-293262/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-827965:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.222995837s)
	I0910 18:22:25.069434  299416 kic.go:203] duration metric: took 4.223151004s to extract preloaded images to volume ...
	W0910 18:22:25.069589  299416 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0910 18:22:25.069701  299416 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0910 18:22:25.125001  299416 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-827965 --name addons-827965 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-827965 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-827965 --network addons-827965 --ip 192.168.49.2 --volume addons-827965:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9
	I0910 18:22:25.465653  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Running}}
	I0910 18:22:25.482921  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:25.502161  299416 cli_runner.go:164] Run: docker exec addons-827965 stat /var/lib/dpkg/alternatives/iptables
	I0910 18:22:25.578018  299416 oci.go:144] the created container "addons-827965" has a running status.
	I0910 18:22:25.578052  299416 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa...
	I0910 18:22:26.012846  299416 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0910 18:22:26.035678  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:26.066542  299416 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0910 18:22:26.066567  299416 kic_runner.go:114] Args: [docker exec --privileged addons-827965 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0910 18:22:26.147091  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:26.178384  299416 machine.go:93] provisionDockerMachine start ...
	I0910 18:22:26.178474  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:26.207325  299416 main.go:141] libmachine: Using SSH client type: native
	I0910 18:22:26.207589  299416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0910 18:22:26.207597  299416 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:22:26.353481  299416 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-827965
	
	I0910 18:22:26.353575  299416 ubuntu.go:169] provisioning hostname "addons-827965"
	I0910 18:22:26.353673  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:26.373682  299416 main.go:141] libmachine: Using SSH client type: native
	I0910 18:22:26.373916  299416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0910 18:22:26.373927  299416 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-827965 && echo "addons-827965" | sudo tee /etc/hostname
	I0910 18:22:26.524097  299416 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-827965
	
	I0910 18:22:26.524177  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:26.545365  299416 main.go:141] libmachine: Using SSH client type: native
	I0910 18:22:26.545675  299416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0910 18:22:26.545701  299416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-827965' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-827965/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-827965' | sudo tee -a /etc/hosts; 
				fi
			fi
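The SSH command above ensures the new hostname resolves locally: it rewrites an existing `127.0.1.1` entry in `/etc/hosts` or appends one. A minimal sketch of the same shape against a scratch file (the file path and hostname here are illustrative stand-ins, not the test's):

```shell
# Scratch stand-ins for /etc/hosts and the provisioned hostname.
HOSTS=$(mktemp)
NAME=addons-example
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Same logic as the log: if the name is absent, rewrite an existing
# 127.0.1.1 line in place, otherwise append a fresh one.
if ! grep -q "[[:space:]]$NAME" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```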
	I0910 18:22:26.676623  299416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:22:26.676649  299416 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19598-293262/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-293262/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-293262/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-293262/.minikube}
	I0910 18:22:26.676678  299416 ubuntu.go:177] setting up certificates
	I0910 18:22:26.676687  299416 provision.go:84] configureAuth start
	I0910 18:22:26.676747  299416 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-827965
	I0910 18:22:26.694351  299416 provision.go:143] copyHostCerts
	I0910 18:22:26.694446  299416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-293262/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-293262/.minikube/ca.pem (1078 bytes)
	I0910 18:22:26.694564  299416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-293262/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-293262/.minikube/cert.pem (1123 bytes)
	I0910 18:22:26.694620  299416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-293262/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-293262/.minikube/key.pem (1675 bytes)
	I0910 18:22:26.694667  299416 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-293262/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-293262/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-293262/.minikube/certs/ca-key.pem org=jenkins.addons-827965 san=[127.0.0.1 192.168.49.2 addons-827965 localhost minikube]
	I0910 18:22:27.443909  299416 provision.go:177] copyRemoteCerts
	I0910 18:22:27.443977  299416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:22:27.444022  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:27.461047  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:27.553459  299416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-293262/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 18:22:27.576640  299416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-293262/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 18:22:27.600203  299416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-293262/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 18:22:27.625122  299416 provision.go:87] duration metric: took 948.41522ms to configureAuth
	I0910 18:22:27.625155  299416 ubuntu.go:193] setting minikube options for container-runtime
	I0910 18:22:27.625377  299416 config.go:182] Loaded profile config "addons-827965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0910 18:22:27.625399  299416 machine.go:96] duration metric: took 1.446997373s to provisionDockerMachine
	I0910 18:22:27.625407  299416 client.go:171] duration metric: took 9.268655605s to LocalClient.Create
	I0910 18:22:27.625425  299416 start.go:167] duration metric: took 9.268723732s to libmachine.API.Create "addons-827965"
	I0910 18:22:27.625438  299416 start.go:293] postStartSetup for "addons-827965" (driver="docker")
	I0910 18:22:27.625450  299416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:22:27.625523  299416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:22:27.625568  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:27.641578  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:27.736031  299416 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:22:27.739727  299416 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0910 18:22:27.739775  299416 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0910 18:22:27.739811  299416 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0910 18:22:27.739824  299416 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0910 18:22:27.739843  299416 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-293262/.minikube/addons for local assets ...
	I0910 18:22:27.739936  299416 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-293262/.minikube/files for local assets ...
	I0910 18:22:27.739974  299416 start.go:296] duration metric: took 114.530158ms for postStartSetup
	I0910 18:22:27.740495  299416 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-827965
	I0910 18:22:27.765177  299416 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/config.json ...
	I0910 18:22:27.765464  299416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:22:27.765507  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:27.781638  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:27.869876  299416 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
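The two `df` invocations above read the mount's usage percentage (column 5) and its free space in whole gigabytes (column 4) from the second line of output. A small sketch of the same awk extraction, querying `/` rather than the container's `/var`:

```shell
# NR==2 skips df's header row; $5 is Use%, $4 is Avail (in -BG units).
USED_PCT=$(df -h / | awk 'NR==2{print $5}')
FREE_GB=$(df -BG / | awk 'NR==2{print $4}')
echo "used=$USED_PCT free=$FREE_GB"
```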
	I0910 18:22:27.874643  299416 start.go:128] duration metric: took 9.520415102s to createHost
	I0910 18:22:27.874668  299416 start.go:83] releasing machines lock for "addons-827965", held for 9.520564099s
	I0910 18:22:27.874740  299416 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-827965
	I0910 18:22:27.891097  299416 ssh_runner.go:195] Run: cat /version.json
	I0910 18:22:27.891151  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:27.891273  299416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:22:27.891328  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:27.914156  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:27.915793  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:28.000568  299416 ssh_runner.go:195] Run: systemctl --version
	I0910 18:22:28.133548  299416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 18:22:28.137907  299416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0910 18:22:28.162760  299416 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0910 18:22:28.162922  299416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:22:28.192346  299416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
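The `find ... -exec mv` above disables conflicting CNI configs by renaming them with a `.mk_disabled` suffix rather than deleting them. The same pattern can be sketched against a scratch directory (the filenames below are illustrative):

```shell
# Scratch directory standing in for /etc/cni/net.d.
CNIDIR=$(mktemp -d)
touch "$CNIDIR/87-podman-bridge.conflist" "$CNIDIR/10-kindnet.conflist"

# Same shape as the log: match bridge/podman configs not already
# disabled, and rename them so the runtime ignores them.
find "$CNIDIR" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$CNIDIR"
```

Renaming keeps the originals recoverable, which is why the log reports them as "disabled" rather than removed.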
	I0910 18:22:28.192419  299416 start.go:495] detecting cgroup driver to use...
	I0910 18:22:28.192468  299416 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0910 18:22:28.192545  299416 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 18:22:28.205018  299416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 18:22:28.216306  299416 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:22:28.216370  299416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:22:28.229884  299416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:22:28.244254  299416 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:22:28.332355  299416 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:22:28.425464  299416 docker.go:233] disabling docker service ...
	I0910 18:22:28.425532  299416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:22:28.449972  299416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:22:28.462051  299416 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:22:28.545530  299416 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:22:28.645773  299416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:22:28.657017  299416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:22:28.673085  299416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 18:22:28.682881  299416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 18:22:28.692509  299416 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 18:22:28.692599  299416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 18:22:28.702615  299416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:22:28.717552  299416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 18:22:28.727183  299416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 18:22:28.736768  299416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:22:28.745857  299416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 18:22:28.755611  299416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 18:22:28.765267  299416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 18:22:28.775335  299416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:22:28.784228  299416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:22:28.793127  299416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:22:28.878446  299416 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 18:22:29.022339  299416 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0910 18:22:29.022582  299416 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0910 18:22:29.027133  299416 start.go:563] Will wait 60s for crictl version
	I0910 18:22:29.027260  299416 ssh_runner.go:195] Run: which crictl
	I0910 18:22:29.031604  299416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:22:29.079260  299416 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.21
	RuntimeApiVersion:  v1
	I0910 18:22:29.079404  299416 ssh_runner.go:195] Run: containerd --version
	I0910 18:22:29.101682  299416 ssh_runner.go:195] Run: containerd --version
	I0910 18:22:29.125885  299416 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.21 ...
	I0910 18:22:29.127843  299416 cli_runner.go:164] Run: docker network inspect addons-827965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0910 18:22:29.142979  299416 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0910 18:22:29.146438  299416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:22:29.157189  299416 kubeadm.go:883] updating cluster {Name:addons-827965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-827965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:22:29.157325  299416 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0910 18:22:29.157392  299416 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:22:29.193264  299416 containerd.go:627] all images are preloaded for containerd runtime.
	I0910 18:22:29.193290  299416 containerd.go:534] Images already preloaded, skipping extraction
	I0910 18:22:29.193355  299416 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:22:29.231011  299416 containerd.go:627] all images are preloaded for containerd runtime.
	I0910 18:22:29.231036  299416 cache_images.go:84] Images are preloaded, skipping loading
	I0910 18:22:29.231045  299416 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0910 18:22:29.231181  299416 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-827965 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-827965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:22:29.231264  299416 ssh_runner.go:195] Run: sudo crictl info
	I0910 18:22:29.271441  299416 cni.go:84] Creating CNI manager for ""
	I0910 18:22:29.271465  299416 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0910 18:22:29.271476  299416 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:22:29.271501  299416 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-827965 NodeName:addons-827965 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:22:29.271650  299416 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-827965"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:22:29.271727  299416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:22:29.280741  299416 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:22:29.280841  299416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:22:29.289701  299416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0910 18:22:29.307948  299416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:22:29.325880  299416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0910 18:22:29.343829  299416 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0910 18:22:29.347330  299416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:22:29.358164  299416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:22:29.435800  299416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:22:29.449768  299416 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965 for IP: 192.168.49.2
	I0910 18:22:29.449792  299416 certs.go:194] generating shared ca certs ...
	I0910 18:22:29.449809  299416 certs.go:226] acquiring lock for ca certs: {Name:mkeeaf8ea92dd449043c729cd08a2925979003d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:22:29.450635  299416 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-293262/.minikube/ca.key
	I0910 18:22:29.937410  299416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-293262/.minikube/ca.crt ...
	I0910 18:22:29.937446  299416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-293262/.minikube/ca.crt: {Name:mk84d02cf844c2d3f97b5d7456bc1659bb326c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:22:29.938126  299416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-293262/.minikube/ca.key ...
	I0910 18:22:29.938143  299416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-293262/.minikube/ca.key: {Name:mkeb3f86d25fd661119099d0e2b01d52176fe398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:22:29.938951  299416 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-293262/.minikube/proxy-client-ca.key
	I0910 18:22:30.407112  299416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-293262/.minikube/proxy-client-ca.crt ...
	I0910 18:22:30.407142  299416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-293262/.minikube/proxy-client-ca.crt: {Name:mkb833f366e5e5a750d863d27b9969c6bee2fc89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:22:30.407332  299416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-293262/.minikube/proxy-client-ca.key ...
	I0910 18:22:30.407345  299416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-293262/.minikube/proxy-client-ca.key: {Name:mkd59beb92c74f3ac7cf89d83a99b2f296824c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:22:30.407430  299416 certs.go:256] generating profile certs ...
	I0910 18:22:30.407494  299416 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.key
	I0910 18:22:30.407512  299416 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt with IP's: []
	I0910 18:22:30.980239  299416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt ...
	I0910 18:22:30.980324  299416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: {Name:mk0fedcd0d0b4cc44f4ff64ef2fbbc301ee5b787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:22:30.980543  299416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.key ...
	I0910 18:22:30.980583  299416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.key: {Name:mk635bbce4b9ca6899cb8b133ecee6bfde54cc25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:22:30.980717  299416 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/apiserver.key.210385aa
	I0910 18:22:30.980761  299416 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/apiserver.crt.210385aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0910 18:22:31.177200  299416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/apiserver.crt.210385aa ...
	I0910 18:22:31.177236  299416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/apiserver.crt.210385aa: {Name:mk1bfe80bfcfca859766ac63100fb75ddec43760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:22:31.177983  299416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/apiserver.key.210385aa ...
	I0910 18:22:31.178003  299416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/apiserver.key.210385aa: {Name:mk9fa0c557cd76459f8eac22228bd8e73f19f6d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:22:31.178097  299416 certs.go:381] copying /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/apiserver.crt.210385aa -> /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/apiserver.crt
	I0910 18:22:31.178178  299416 certs.go:385] copying /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/apiserver.key.210385aa -> /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/apiserver.key
	I0910 18:22:31.178233  299416 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/proxy-client.key
	I0910 18:22:31.178254  299416 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/proxy-client.crt with IP's: []
	I0910 18:22:32.022049  299416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/proxy-client.crt ...
	I0910 18:22:32.022083  299416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/proxy-client.crt: {Name:mk5d02b236131f86ce088abc42c62af2b05215d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:22:32.023088  299416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/proxy-client.key ...
	I0910 18:22:32.023115  299416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/proxy-client.key: {Name:mk6197c26e333e3842e0a6a6f6c11fa2b7406c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:22:32.024678  299416 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-293262/.minikube/certs/ca-key.pem (1675 bytes)
	I0910 18:22:32.024726  299416 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-293262/.minikube/certs/ca.pem (1078 bytes)
	I0910 18:22:32.024754  299416 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-293262/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:22:32.024808  299416 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-293262/.minikube/certs/key.pem (1675 bytes)
	I0910 18:22:32.025461  299416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-293262/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:22:32.052488  299416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-293262/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0910 18:22:32.077568  299416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-293262/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:22:32.102253  299416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-293262/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 18:22:32.127231  299416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0910 18:22:32.152312  299416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:22:32.177318  299416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:22:32.201302  299416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:22:32.225554  299416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-293262/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:22:32.250316  299416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:22:32.268974  299416 ssh_runner.go:195] Run: openssl version
	I0910 18:22:32.274490  299416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:22:32.284078  299416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:22:32.287659  299416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:22:32.287725  299416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:22:32.294694  299416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:22:32.304192  299416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:22:32.307551  299416 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 18:22:32.307610  299416 kubeadm.go:392] StartCluster: {Name:addons-827965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-827965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:22:32.307702  299416 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0910 18:22:32.307775  299416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:22:32.350333  299416 cri.go:89] found id: ""
	I0910 18:22:32.350404  299416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:22:32.359258  299416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:22:32.372388  299416 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0910 18:22:32.372460  299416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:22:32.381760  299416 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:22:32.381778  299416 kubeadm.go:157] found existing configuration files:
	
	I0910 18:22:32.381835  299416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:22:32.390737  299416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:22:32.390834  299416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:22:32.399586  299416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:22:32.408676  299416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:22:32.408761  299416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:22:32.417406  299416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:22:32.426628  299416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:22:32.426697  299416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:22:32.435464  299416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:22:32.444436  299416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:22:32.444530  299416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:22:32.453067  299416 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0910 18:22:32.512405  299416 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 18:22:32.512715  299416 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 18:22:32.542222  299416 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0910 18:22:32.542337  299416 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0910 18:22:32.542399  299416 kubeadm.go:310] OS: Linux
	I0910 18:22:32.542463  299416 kubeadm.go:310] CGROUPS_CPU: enabled
	I0910 18:22:32.542537  299416 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0910 18:22:32.542605  299416 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0910 18:22:32.542666  299416 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0910 18:22:32.542737  299416 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0910 18:22:32.542804  299416 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0910 18:22:32.542872  299416 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0910 18:22:32.542946  299416 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0910 18:22:32.543013  299416 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0910 18:22:32.604156  299416 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 18:22:32.604337  299416 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 18:22:32.604463  299416 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 18:22:32.610257  299416 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 18:22:32.613706  299416 out.go:235]   - Generating certificates and keys ...
	I0910 18:22:32.613816  299416 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 18:22:32.613895  299416 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 18:22:33.021560  299416 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 18:22:33.582978  299416 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 18:22:33.992290  299416 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 18:22:34.567013  299416 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 18:22:35.511529  299416 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 18:22:35.511869  299416 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-827965 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0910 18:22:36.751941  299416 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 18:22:36.752357  299416 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-827965 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0910 18:22:37.938221  299416 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 18:22:38.162969  299416 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 18:22:38.423453  299416 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 18:22:38.423773  299416 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 18:22:38.969902  299416 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 18:22:39.202596  299416 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 18:22:39.552956  299416 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 18:22:40.287869  299416 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 18:22:40.894298  299416 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 18:22:40.895008  299416 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 18:22:40.900003  299416 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 18:22:40.902465  299416 out.go:235]   - Booting up control plane ...
	I0910 18:22:40.902576  299416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 18:22:40.902659  299416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 18:22:40.902732  299416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 18:22:40.912503  299416 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 18:22:40.919495  299416 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 18:22:40.919555  299416 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 18:22:41.035296  299416 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 18:22:41.035420  299416 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 18:22:42.536933  299416 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501645683s
	I0910 18:22:42.537028  299416 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 18:22:50.540583  299416 kubeadm.go:310] [api-check] The API server is healthy after 8.002001587s
	I0910 18:22:50.558096  299416 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 18:22:50.570896  299416 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 18:22:50.595440  299416 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 18:22:50.595633  299416 kubeadm.go:310] [mark-control-plane] Marking the node addons-827965 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 18:22:50.605617  299416 kubeadm.go:310] [bootstrap-token] Using token: tlh1yp.c3h404ph8ae7ygxh
	I0910 18:22:50.607655  299416 out.go:235]   - Configuring RBAC rules ...
	I0910 18:22:50.607771  299416 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 18:22:50.612037  299416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 18:22:50.620547  299416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 18:22:50.624000  299416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 18:22:50.627911  299416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 18:22:50.631422  299416 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 18:22:50.947151  299416 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 18:22:51.379177  299416 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 18:22:51.945130  299416 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 18:22:51.946334  299416 kubeadm.go:310] 
	I0910 18:22:51.946406  299416 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 18:22:51.946417  299416 kubeadm.go:310] 
	I0910 18:22:51.946499  299416 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 18:22:51.946508  299416 kubeadm.go:310] 
	I0910 18:22:51.946533  299416 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 18:22:51.946594  299416 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 18:22:51.946648  299416 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 18:22:51.946657  299416 kubeadm.go:310] 
	I0910 18:22:51.946709  299416 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 18:22:51.946717  299416 kubeadm.go:310] 
	I0910 18:22:51.946763  299416 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 18:22:51.946772  299416 kubeadm.go:310] 
	I0910 18:22:51.946823  299416 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 18:22:51.946899  299416 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 18:22:51.946990  299416 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 18:22:51.946999  299416 kubeadm.go:310] 
	I0910 18:22:51.947080  299416 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 18:22:51.947157  299416 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 18:22:51.947166  299416 kubeadm.go:310] 
	I0910 18:22:51.947247  299416 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tlh1yp.c3h404ph8ae7ygxh \
	I0910 18:22:51.947352  299416 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ac6f0834de3397ba52f09337798e82c84fd816350eb2fc30df2ddfe9ce1bbe29 \
	I0910 18:22:51.947376  299416 kubeadm.go:310] 	--control-plane 
	I0910 18:22:51.947380  299416 kubeadm.go:310] 
	I0910 18:22:51.947462  299416 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 18:22:51.947466  299416 kubeadm.go:310] 
	I0910 18:22:51.947545  299416 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tlh1yp.c3h404ph8ae7ygxh \
	I0910 18:22:51.947647  299416 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ac6f0834de3397ba52f09337798e82c84fd816350eb2fc30df2ddfe9ce1bbe29 
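An aside on the join commands printed above (this is not part of the minikube log): both reference a `--discovery-token-ca-cert-hash`, which is the SHA-256 digest of the cluster CA's DER-encoded public key. The sketch below recomputes such a hash the way the kubeadm documentation describes, substituting a throwaway self-signed CA for the cluster's real `/etc/kubernetes/pki/ca.crt`:

```shell
# Throwaway CA certificate as a stand-in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.crt \
  -subj "/CN=kubernetes" -days 1 2>/dev/null

# Recompute the discovery-token-ca-cert-hash: sha256 over the DER-encoded
# public key extracted from the CA certificate (per the kubeadm docs).
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')
echo "sha256:${hash}"
```

A joining node can pass this value to `kubeadm join` so it can verify it is talking to the intended control plane before trusting it.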
	I0910 18:22:51.951602  299416 kubeadm.go:310] W0910 18:22:32.508865    1025 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:22:51.951900  299416 kubeadm.go:310] W0910 18:22:32.509864    1025 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 18:22:51.952115  299416 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0910 18:22:51.952219  299416 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 18:22:51.952239  299416 cni.go:84] Creating CNI manager for ""
	I0910 18:22:51.952250  299416 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0910 18:22:51.954432  299416 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0910 18:22:51.956327  299416 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0910 18:22:51.960082  299416 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0910 18:22:51.960103  299416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0910 18:22:51.980618  299416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0910 18:22:52.253309  299416 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 18:22:52.253445  299416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:22:52.253528  299416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-827965 minikube.k8s.io/updated_at=2024_09_10T18_22_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=addons-827965 minikube.k8s.io/primary=true
	I0910 18:22:52.260658  299416 ops.go:34] apiserver oom_adj: -16
	I0910 18:22:52.413945  299416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:22:52.914622  299416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:22:53.414091  299416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:22:53.914642  299416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:22:54.414220  299416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:22:54.914477  299416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:22:55.414275  299416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:22:55.914752  299416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:22:56.414645  299416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:22:56.914474  299416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 18:22:57.012634  299416 kubeadm.go:1113] duration metric: took 4.759236637s to wait for elevateKubeSystemPrivileges
	I0910 18:22:57.012664  299416 kubeadm.go:394] duration metric: took 24.705057428s to StartCluster
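The repeated `kubectl get sa default` lines above are minikube polling roughly every 500ms until the `default` ServiceAccount exists, which is how it knows kube-system privileges can be elevated. As an illustrative sketch only (not minikube's actual code), the same poll-until-ready pattern looks like this, with a background `touch` standing in for the ServiceAccount appearing:

```shell
# Simulate the ServiceAccount becoming available ~1s after we start polling.
(sleep 1; touch /tmp/sa-ready) &

# Poll every 500ms, giving up after 20 attempts (~10s).
tries=0
until [ -e /tmp/sa-ready ]; do
  tries=$((tries + 1))
  if [ "$tries" -ge 20 ]; then
    echo "timed out waiting for serviceaccount" >&2
    exit 1
  fi
  sleep 0.5
done
status=ready
echo "default serviceaccount ${status}"
wait
rm -f /tmp/sa-ready
```

In the real run above, the loop succeeded after about ten iterations (roughly 4.76s).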
	I0910 18:22:57.012683  299416 settings.go:142] acquiring lock: {Name:mkbaf7b7b9fd07785bd3a33aa83f50c0af67b1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:22:57.012955  299416 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-293262/kubeconfig
	I0910 18:22:57.013470  299416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-293262/kubeconfig: {Name:mkaebf1ab3a7e58e0c8806a841f61cbcd05c876a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:22:57.013750  299416 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0910 18:22:57.013892  299416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 18:22:57.014168  299416 config.go:182] Loaded profile config "addons-827965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0910 18:22:57.014202  299416 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0910 18:22:57.014286  299416 addons.go:69] Setting yakd=true in profile "addons-827965"
	I0910 18:22:57.014307  299416 addons.go:234] Setting addon yakd=true in "addons-827965"
	I0910 18:22:57.014332  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.014981  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.017500  299416 out.go:177] * Verifying Kubernetes components...
	I0910 18:22:57.018130  299416 addons.go:69] Setting inspektor-gadget=true in profile "addons-827965"
	I0910 18:22:57.018163  299416 addons.go:234] Setting addon inspektor-gadget=true in "addons-827965"
	I0910 18:22:57.018198  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.018233  299416 addons.go:69] Setting metrics-server=true in profile "addons-827965"
	I0910 18:22:57.018283  299416 addons.go:234] Setting addon metrics-server=true in "addons-827965"
	I0910 18:22:57.018328  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.018726  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.018900  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.024160  299416 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-827965"
	I0910 18:22:57.024874  299416 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-827965"
	I0910 18:22:57.024993  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.025525  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.032434  299416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:22:57.024179  299416 addons.go:69] Setting registry=true in profile "addons-827965"
	I0910 18:22:57.035974  299416 addons.go:234] Setting addon registry=true in "addons-827965"
	I0910 18:22:57.036048  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.036722  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.038447  299416 addons.go:69] Setting cloud-spanner=true in profile "addons-827965"
	I0910 18:22:57.038502  299416 addons.go:234] Setting addon cloud-spanner=true in "addons-827965"
	I0910 18:22:57.038546  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.038997  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.041202  299416 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-827965"
	I0910 18:22:57.041291  299416 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-827965"
	I0910 18:22:57.041326  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.041774  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.024191  299416 addons.go:69] Setting storage-provisioner=true in profile "addons-827965"
	I0910 18:22:57.042673  299416 addons.go:234] Setting addon storage-provisioner=true in "addons-827965"
	I0910 18:22:57.042715  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.043132  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.045892  299416 addons.go:69] Setting default-storageclass=true in profile "addons-827965"
	I0910 18:22:57.045928  299416 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-827965"
	I0910 18:22:57.046248  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.024205  299416 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-827965"
	I0910 18:22:57.052230  299416 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-827965"
	I0910 18:22:57.052551  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.068298  299416 addons.go:69] Setting gcp-auth=true in profile "addons-827965"
	I0910 18:22:57.068366  299416 mustload.go:65] Loading cluster: addons-827965
	I0910 18:22:57.068609  299416 config.go:182] Loaded profile config "addons-827965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0910 18:22:57.069005  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.024212  299416 addons.go:69] Setting volcano=true in profile "addons-827965"
	I0910 18:22:57.072918  299416 addons.go:234] Setting addon volcano=true in "addons-827965"
	I0910 18:22:57.072966  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.073432  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.024218  299416 addons.go:69] Setting volumesnapshots=true in profile "addons-827965"
	I0910 18:22:57.077087  299416 addons.go:234] Setting addon volumesnapshots=true in "addons-827965"
	I0910 18:22:57.077135  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.077590  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.095387  299416 addons.go:69] Setting ingress=true in profile "addons-827965"
	I0910 18:22:57.095450  299416 addons.go:234] Setting addon ingress=true in "addons-827965"
	I0910 18:22:57.095516  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.096258  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.118578  299416 addons.go:69] Setting ingress-dns=true in profile "addons-827965"
	I0910 18:22:57.118622  299416 addons.go:234] Setting addon ingress-dns=true in "addons-827965"
	I0910 18:22:57.118673  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.119132  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.178563  299416 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0910 18:22:57.187321  299416 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0910 18:22:57.187407  299416 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0910 18:22:57.187495  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:57.192113  299416 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0910 18:22:57.194277  299416 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 18:22:57.194299  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0910 18:22:57.194359  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:57.194531  299416 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0910 18:22:57.204513  299416 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0910 18:22:57.204661  299416 out.go:177]   - Using image docker.io/registry:2.8.3
	I0910 18:22:57.219246  299416 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0910 18:22:57.221848  299416 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0910 18:22:57.224063  299416 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0910 18:22:57.226059  299416 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0910 18:22:57.227924  299416 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0910 18:22:57.231691  299416 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0910 18:22:57.238165  299416 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0910 18:22:57.238244  299416 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0910 18:22:57.238255  299416 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0910 18:22:57.238321  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:57.240552  299416 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0910 18:22:57.241367  299416 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:22:57.245222  299416 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0910 18:22:57.245361  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0910 18:22:57.245438  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:57.261065  299416 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0910 18:22:57.261090  299416 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0910 18:22:57.261158  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:57.269965  299416 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:22:57.269988  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 18:22:57.270053  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:57.283016  299416 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0910 18:22:57.303334  299416 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0910 18:22:57.303498  299416 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 18:22:57.303514  299416 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 18:22:57.303593  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:57.303791  299416 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0910 18:22:57.317307  299416 addons.go:234] Setting addon default-storageclass=true in "addons-827965"
	I0910 18:22:57.317352  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.317800  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.329507  299416 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0910 18:22:57.329530  299416 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0910 18:22:57.329604  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:57.347167  299416 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0910 18:22:57.347188  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0910 18:22:57.347253  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:57.362823  299416 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-827965"
	I0910 18:22:57.362865  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.363302  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:22:57.396684  299416 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0910 18:22:57.398479  299416 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0910 18:22:57.404413  299416 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0910 18:22:57.407077  299416 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0910 18:22:57.407098  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0910 18:22:57.407168  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:57.411770  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:22:57.415601  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:57.419155  299416 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0910 18:22:57.419350  299416 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 18:22:57.422617  299416 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 18:22:57.422639  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0910 18:22:57.422722  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:57.442961  299416 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 18:22:57.444853  299416 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0910 18:22:57.447011  299416 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 18:22:57.447032  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0910 18:22:57.447100  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:57.482282  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:57.482751  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:57.499113  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:57.522583  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:57.548208  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:57.548692  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:57.557741  299416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:22:57.558046  299416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
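The long `sed` pipeline above rewrites the CoreDNS Corefile in place: it inserts a `hosts` block mapping `host.minikube.internal` to the gateway IP just before the `forward` plugin, and enables query logging before `errors`. The sketch below applies that same sed rewrite (GNU sed, as in the container) to a minimal sample Corefile rather than the live ConfigMap:

```shell
# Minimal stand-in Corefile (the real one comes from the coredns ConfigMap).
cat > /tmp/Corefile <<'EOF'
.:53 {
        errors
        health
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}
EOF

# Same rewrite as in the log: hosts{} before forward, log before errors.
sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
    -e '/^        errors *$/i \        log' /tmp/Corefile > /tmp/Corefile.new
cat /tmp/Corefile.new
```

With `fallthrough`, names not matched by the `hosts` block still fall through to `forward`, so only `host.minikube.internal` is answered locally.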
	I0910 18:22:57.559242  299416 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0910 18:22:57.561168  299416 out.go:177]   - Using image docker.io/busybox:stable
	I0910 18:22:57.564377  299416 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 18:22:57.564396  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0910 18:22:57.564459  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:57.580956  299416 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 18:22:57.580981  299416 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 18:22:57.581048  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:22:57.589229  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:57.603836  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:57.604320  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:57.631594  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:57.647514  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	W0910 18:22:57.650932  299416 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0910 18:22:57.650963  299416 retry.go:31] will retry after 195.448515ms: ssh: handshake failed: EOF
	I0910 18:22:57.664721  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:22:57.674425  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	W0910 18:22:57.675356  299416 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0910 18:22:57.675385  299416 retry.go:31] will retry after 235.730808ms: ssh: handshake failed: EOF
	I0910 18:22:57.948426  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 18:22:57.994801  299416 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 18:22:57.994865  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0910 18:22:58.044900  299416 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0910 18:22:58.044967  299416 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0910 18:22:58.093914  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0910 18:22:58.186737  299416 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0910 18:22:58.186815  299416 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0910 18:22:58.196425  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 18:22:58.222699  299416 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0910 18:22:58.222721  299416 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0910 18:22:58.276665  299416 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0910 18:22:58.276751  299416 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0910 18:22:58.311839  299416 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0910 18:22:58.311908  299416 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0910 18:22:58.333785  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 18:22:58.339730  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0910 18:22:58.347838  299416 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0910 18:22:58.347909  299416 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0910 18:22:58.374554  299416 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 18:22:58.374631  299416 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 18:22:58.399886  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:22:58.514131  299416 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0910 18:22:58.514207  299416 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0910 18:22:58.534299  299416 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0910 18:22:58.534365  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0910 18:22:58.583744  299416 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0910 18:22:58.583839  299416 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0910 18:22:58.687134  299416 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0910 18:22:58.687213  299416 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0910 18:22:58.722083  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 18:22:58.730996  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 18:22:58.751195  299416 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 18:22:58.751268  299416 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 18:22:58.823048  299416 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0910 18:22:58.823116  299416 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0910 18:22:58.837666  299416 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0910 18:22:58.837734  299416 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0910 18:22:58.894317  299416 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0910 18:22:58.894397  299416 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0910 18:22:58.986632  299416 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0910 18:22:58.986706  299416 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0910 18:22:59.000441  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0910 18:22:59.095564  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 18:22:59.144249  299416 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0910 18:22:59.144314  299416 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0910 18:22:59.332683  299416 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0910 18:22:59.332755  299416 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0910 18:22:59.340890  299416 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0910 18:22:59.340956  299416 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0910 18:22:59.341335  299416 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0910 18:22:59.341377  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0910 18:22:59.513712  299416 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0910 18:22:59.513795  299416 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0910 18:22:59.679023  299416 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0910 18:22:59.679109  299416 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0910 18:22:59.849579  299416 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.291495229s)
	I0910 18:22:59.850537  299416 node_ready.go:35] waiting up to 6m0s for node "addons-827965" to be "Ready" ...
	I0910 18:22:59.849669  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.901170636s)
	I0910 18:22:59.850991  299416 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.292824446s)
	I0910 18:22:59.851010  299416 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0910 18:22:59.855371  299416 node_ready.go:49] node "addons-827965" has status "Ready":"True"
	I0910 18:22:59.855396  299416 node_ready.go:38] duration metric: took 4.794837ms for node "addons-827965" to be "Ready" ...
	I0910 18:22:59.855406  299416 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:22:59.882074  299416 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-bjn88" in "kube-system" namespace to be "Ready" ...
	I0910 18:22:59.931260  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0910 18:23:00.006387  299416 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 18:23:00.006462  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0910 18:23:00.031306  299416 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0910 18:23:00.031395  299416 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0910 18:23:00.109086  299416 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 18:23:00.109175  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0910 18:23:00.284559  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 18:23:00.356670  299416 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-827965" context rescaled to 1 replicas
	I0910 18:23:00.380745  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 18:23:00.585501  299416 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0910 18:23:00.585569  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0910 18:23:00.770646  299416 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0910 18:23:00.770720  299416 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0910 18:23:01.254588  299416 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0910 18:23:01.254669  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0910 18:23:01.629919  299416 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0910 18:23:01.629984  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0910 18:23:01.890162  299416 pod_ready.go:103] pod "coredns-6f6b679f8f-bjn88" in "kube-system" namespace has status "Ready":"False"
	I0910 18:23:02.090901  299416 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 18:23:02.090970  299416 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0910 18:23:02.353948  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 18:23:03.911898  299416 pod_ready.go:103] pod "coredns-6f6b679f8f-bjn88" in "kube-system" namespace has status "Ready":"False"
	I0910 18:23:04.621178  299416 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0910 18:23:04.621324  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:23:04.654596  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:23:04.949279  299416 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0910 18:23:04.997724  299416 addons.go:234] Setting addon gcp-auth=true in "addons-827965"
	I0910 18:23:04.997822  299416 host.go:66] Checking if "addons-827965" exists ...
	I0910 18:23:04.998373  299416 cli_runner.go:164] Run: docker container inspect addons-827965 --format={{.State.Status}}
	I0910 18:23:05.024909  299416 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0910 18:23:05.024968  299416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827965
	I0910 18:23:05.068087  299416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/addons-827965/id_rsa Username:docker}
	I0910 18:23:06.408626  299416 pod_ready.go:103] pod "coredns-6f6b679f8f-bjn88" in "kube-system" namespace has status "Ready":"False"
	I0910 18:23:07.245536  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.151539546s)
	I0910 18:23:07.245688  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.049200044s)
	I0910 18:23:07.245706  299416 addons.go:475] Verifying addon ingress=true in "addons-827965"
	I0910 18:23:07.245877  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.912027124s)
	I0910 18:23:07.246066  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.906267516s)
	I0910 18:23:07.246123  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.846164365s)
	I0910 18:23:07.246169  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.524020397s)
	I0910 18:23:07.246273  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.245756662s)
	I0910 18:23:07.246262  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.515190466s)
	I0910 18:23:07.246286  299416 addons.go:475] Verifying addon registry=true in "addons-827965"
	I0910 18:23:07.246366  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.150726078s)
	I0910 18:23:07.246379  299416 addons.go:475] Verifying addon metrics-server=true in "addons-827965"
	I0910 18:23:07.246426  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.315097651s)
	I0910 18:23:07.246688  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.962041299s)
	W0910 18:23:07.246730  299416 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0910 18:23:07.246748  299416 retry.go:31] will retry after 364.741165ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0910 18:23:07.246813  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.865953226s)
	I0910 18:23:07.248012  299416 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-827965 service yakd-dashboard -n yakd-dashboard
	
	I0910 18:23:07.248031  299416 out.go:177] * Verifying registry addon...
	I0910 18:23:07.248041  299416 out.go:177] * Verifying ingress addon...
	I0910 18:23:07.251868  299416 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0910 18:23:07.251875  299416 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0910 18:23:07.299412  299416 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0910 18:23:07.299486  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:07.300074  299416 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0910 18:23:07.300124  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0910 18:23:07.319586  299416 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0910 18:23:07.611619  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 18:23:07.782393  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:07.789561  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:07.995005  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.640961294s)
	I0910 18:23:07.995038  299416 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-827965"
	I0910 18:23:07.995237  299416 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.970305516s)
	I0910 18:23:07.997341  299416 out.go:177] * Verifying csi-hostpath-driver addon...
	I0910 18:23:07.997343  299416 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0910 18:23:07.998973  299416 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 18:23:07.999732  299416 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0910 18:23:08.000543  299416 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0910 18:23:08.000604  299416 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0910 18:23:08.018924  299416 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0910 18:23:08.019017  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:08.101746  299416 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0910 18:23:08.101903  299416 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0910 18:23:08.125403  299416 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 18:23:08.125475  299416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0910 18:23:08.145818  299416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 18:23:08.260097  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:08.261903  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:08.506271  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:08.772959  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:08.773072  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:08.889820  299416 pod_ready.go:103] pod "coredns-6f6b679f8f-bjn88" in "kube-system" namespace has status "Ready":"False"
	I0910 18:23:09.005541  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:09.256873  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:09.257730  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:09.383003  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.23710411s)
	I0910 18:23:09.383080  299416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.771367898s)
	I0910 18:23:09.386109  299416 addons.go:475] Verifying addon gcp-auth=true in "addons-827965"
	I0910 18:23:09.391600  299416 out.go:177] * Verifying gcp-auth addon...
	I0910 18:23:09.394553  299416 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0910 18:23:09.396808  299416 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 18:23:09.504900  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:09.757898  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:09.758488  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:10.005322  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:10.257822  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:10.259967  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:10.521297  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:10.757297  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:10.758258  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:11.005354  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:11.258174  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:11.260245  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:11.388694  299416 pod_ready.go:103] pod "coredns-6f6b679f8f-bjn88" in "kube-system" namespace has status "Ready":"False"
	I0910 18:23:11.505622  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:11.757376  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:11.758559  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:12.013776  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:12.257371  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:12.258210  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:12.505438  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:12.767789  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:12.769138  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:13.005569  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:13.258781  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:13.260130  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:13.506281  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:13.762342  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:13.764584  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:13.889259  299416 pod_ready.go:103] pod "coredns-6f6b679f8f-bjn88" in "kube-system" namespace has status "Ready":"False"
	I0910 18:23:14.005442  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:14.257352  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:14.258211  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:14.508838  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:14.760091  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:14.760999  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:14.889056  299416 pod_ready.go:93] pod "coredns-6f6b679f8f-bjn88" in "kube-system" namespace has status "Ready":"True"
	I0910 18:23:14.889084  299416 pod_ready.go:82] duration metric: took 15.00693585s for pod "coredns-6f6b679f8f-bjn88" in "kube-system" namespace to be "Ready" ...
	I0910 18:23:14.889105  299416 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-w4scb" in "kube-system" namespace to be "Ready" ...
	I0910 18:23:14.891965  299416 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-w4scb" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-w4scb" not found
	I0910 18:23:14.891993  299416 pod_ready.go:82] duration metric: took 2.878717ms for pod "coredns-6f6b679f8f-w4scb" in "kube-system" namespace to be "Ready" ...
	E0910 18:23:14.892004  299416 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-w4scb" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-w4scb" not found
	I0910 18:23:14.892012  299416 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-827965" in "kube-system" namespace to be "Ready" ...
	I0910 18:23:14.900624  299416 pod_ready.go:93] pod "etcd-addons-827965" in "kube-system" namespace has status "Ready":"True"
	I0910 18:23:14.900666  299416 pod_ready.go:82] duration metric: took 8.63702ms for pod "etcd-addons-827965" in "kube-system" namespace to be "Ready" ...
	I0910 18:23:14.900681  299416 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-827965" in "kube-system" namespace to be "Ready" ...
	I0910 18:23:14.907496  299416 pod_ready.go:93] pod "kube-apiserver-addons-827965" in "kube-system" namespace has status "Ready":"True"
	I0910 18:23:14.907533  299416 pod_ready.go:82] duration metric: took 6.844083ms for pod "kube-apiserver-addons-827965" in "kube-system" namespace to be "Ready" ...
	I0910 18:23:14.907544  299416 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-827965" in "kube-system" namespace to be "Ready" ...
	I0910 18:23:14.914136  299416 pod_ready.go:93] pod "kube-controller-manager-addons-827965" in "kube-system" namespace has status "Ready":"True"
	I0910 18:23:14.914163  299416 pod_ready.go:82] duration metric: took 6.610411ms for pod "kube-controller-manager-addons-827965" in "kube-system" namespace to be "Ready" ...
	I0910 18:23:14.914176  299416 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrtn6" in "kube-system" namespace to be "Ready" ...
	I0910 18:23:15.005839  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:15.091526  299416 pod_ready.go:93] pod "kube-proxy-nrtn6" in "kube-system" namespace has status "Ready":"True"
	I0910 18:23:15.091601  299416 pod_ready.go:82] duration metric: took 177.409682ms for pod "kube-proxy-nrtn6" in "kube-system" namespace to be "Ready" ...
	I0910 18:23:15.091629  299416 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-827965" in "kube-system" namespace to be "Ready" ...
	I0910 18:23:15.258290  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:15.258857  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:15.485666  299416 pod_ready.go:93] pod "kube-scheduler-addons-827965" in "kube-system" namespace has status "Ready":"True"
	I0910 18:23:15.485703  299416 pod_ready.go:82] duration metric: took 394.049393ms for pod "kube-scheduler-addons-827965" in "kube-system" namespace to be "Ready" ...
	I0910 18:23:15.485716  299416 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-cjl9w" in "kube-system" namespace to be "Ready" ...
	I0910 18:23:15.515253  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:15.756316  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:15.757362  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:16.010753  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:16.258373  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:16.259348  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:16.505025  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:16.758420  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:16.759733  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:17.014021  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:17.258681  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:17.259384  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:17.503604  299416 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-cjl9w" in "kube-system" namespace has status "Ready":"False"
	I0910 18:23:17.506650  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:17.759174  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:17.759744  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:18.005657  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:18.257039  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:18.258726  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:18.505561  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:18.758396  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:18.759945  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:19.006624  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:19.257431  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:19.258366  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:19.504856  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:19.756866  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:19.758105  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:19.994124  299416 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-cjl9w" in "kube-system" namespace has status "Ready":"False"
	I0910 18:23:20.007974  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:20.257807  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:20.259261  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:20.513622  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:20.757395  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:20.758852  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:21.005226  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:21.258822  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:21.260498  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:21.504338  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:21.757067  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:21.757876  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:21.992293  299416 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-cjl9w" in "kube-system" namespace has status "Ready":"True"
	I0910 18:23:21.992319  299416 pod_ready.go:82] duration metric: took 6.506594745s for pod "nvidia-device-plugin-daemonset-cjl9w" in "kube-system" namespace to be "Ready" ...
	I0910 18:23:21.992329  299416 pod_ready.go:39] duration metric: took 22.136911397s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:23:21.992347  299416 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:23:21.992419  299416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:23:22.006393  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:22.007259  299416 api_server.go:72] duration metric: took 24.993471604s to wait for apiserver process to appear ...
	I0910 18:23:22.007334  299416 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:23:22.007373  299416 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0910 18:23:22.024344  299416 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0910 18:23:22.030559  299416 api_server.go:141] control plane version: v1.31.0
	I0910 18:23:22.030597  299416 api_server.go:131] duration metric: took 23.241473ms to wait for apiserver health ...
	I0910 18:23:22.030930  299416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:23:22.045988  299416 system_pods.go:59] 18 kube-system pods found
	I0910 18:23:22.046032  299416 system_pods.go:61] "coredns-6f6b679f8f-bjn88" [594b662a-f2ef-45a8-a039-06f27308d125] Running
	I0910 18:23:22.046043  299416 system_pods.go:61] "csi-hostpath-attacher-0" [3c9510d4-8a21-4316-bbb1-7e572354faa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 18:23:22.046052  299416 system_pods.go:61] "csi-hostpath-resizer-0" [2a2cefaf-593c-481c-837e-785a4a3aa902] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 18:23:22.046084  299416 system_pods.go:61] "csi-hostpathplugin-4542j" [9d65e2b3-0bad-4cf4-bec5-7ded0c55f0b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 18:23:22.046097  299416 system_pods.go:61] "etcd-addons-827965" [1cc28570-1d7f-48a7-9760-dc5ab8dcd733] Running
	I0910 18:23:22.046102  299416 system_pods.go:61] "kindnet-c4t7q" [2841bfba-e43f-43ef-8b88-0435ea3e41d5] Running
	I0910 18:23:22.046106  299416 system_pods.go:61] "kube-apiserver-addons-827965" [aa353b05-b2fc-4125-aad8-6ccef41f62ab] Running
	I0910 18:23:22.046115  299416 system_pods.go:61] "kube-controller-manager-addons-827965" [9cf0b7df-79fd-49fe-bc84-a0f5506f0335] Running
	I0910 18:23:22.046119  299416 system_pods.go:61] "kube-ingress-dns-minikube" [3d4d6b38-95c2-40de-ab8f-823163389480] Running
	I0910 18:23:22.046126  299416 system_pods.go:61] "kube-proxy-nrtn6" [59483753-2db2-4234-a203-802954ec5861] Running
	I0910 18:23:22.046132  299416 system_pods.go:61] "kube-scheduler-addons-827965" [72918edd-a98c-4eeb-85fa-c3cfcbaed1e6] Running
	I0910 18:23:22.046138  299416 system_pods.go:61] "metrics-server-84c5f94fbc-57b72" [52cb36ea-2cee-4a6c-a843-050157ac918f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 18:23:22.046145  299416 system_pods.go:61] "nvidia-device-plugin-daemonset-cjl9w" [cf6a3aee-788b-4415-a203-22cbc57cda34] Running
	I0910 18:23:22.046177  299416 system_pods.go:61] "registry-66c9cd494c-7t72n" [8fb774f6-bb57-4fa6-b59f-756cfe1a578c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 18:23:22.046195  299416 system_pods.go:61] "registry-proxy-z6m52" [b1b42e77-8421-4f88-b61b-d3f2d3e35d90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 18:23:22.046209  299416 system_pods.go:61] "snapshot-controller-56fcc65765-6zrwq" [1746b7ce-4dd9-4183-b370-c9116076a8d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 18:23:22.046217  299416 system_pods.go:61] "snapshot-controller-56fcc65765-z7nt4" [020d5a48-27e2-45cc-a6c4-9810352d5986] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 18:23:22.046226  299416 system_pods.go:61] "storage-provisioner" [d0fae042-74da-4edc-85df-c318851b7cb1] Running
	I0910 18:23:22.046234  299416 system_pods.go:74] duration metric: took 15.289375ms to wait for pod list to return data ...
	I0910 18:23:22.046247  299416 default_sa.go:34] waiting for default service account to be created ...
	I0910 18:23:22.049369  299416 default_sa.go:45] found service account: "default"
	I0910 18:23:22.049397  299416 default_sa.go:55] duration metric: took 3.143175ms for default service account to be created ...
	I0910 18:23:22.049408  299416 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 18:23:22.058643  299416 system_pods.go:86] 18 kube-system pods found
	I0910 18:23:22.058681  299416 system_pods.go:89] "coredns-6f6b679f8f-bjn88" [594b662a-f2ef-45a8-a039-06f27308d125] Running
	I0910 18:23:22.058692  299416 system_pods.go:89] "csi-hostpath-attacher-0" [3c9510d4-8a21-4316-bbb1-7e572354faa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 18:23:22.058700  299416 system_pods.go:89] "csi-hostpath-resizer-0" [2a2cefaf-593c-481c-837e-785a4a3aa902] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 18:23:22.058708  299416 system_pods.go:89] "csi-hostpathplugin-4542j" [9d65e2b3-0bad-4cf4-bec5-7ded0c55f0b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 18:23:22.058713  299416 system_pods.go:89] "etcd-addons-827965" [1cc28570-1d7f-48a7-9760-dc5ab8dcd733] Running
	I0910 18:23:22.058718  299416 system_pods.go:89] "kindnet-c4t7q" [2841bfba-e43f-43ef-8b88-0435ea3e41d5] Running
	I0910 18:23:22.058722  299416 system_pods.go:89] "kube-apiserver-addons-827965" [aa353b05-b2fc-4125-aad8-6ccef41f62ab] Running
	I0910 18:23:22.058732  299416 system_pods.go:89] "kube-controller-manager-addons-827965" [9cf0b7df-79fd-49fe-bc84-a0f5506f0335] Running
	I0910 18:23:22.058737  299416 system_pods.go:89] "kube-ingress-dns-minikube" [3d4d6b38-95c2-40de-ab8f-823163389480] Running
	I0910 18:23:22.058741  299416 system_pods.go:89] "kube-proxy-nrtn6" [59483753-2db2-4234-a203-802954ec5861] Running
	I0910 18:23:22.058745  299416 system_pods.go:89] "kube-scheduler-addons-827965" [72918edd-a98c-4eeb-85fa-c3cfcbaed1e6] Running
	I0910 18:23:22.058761  299416 system_pods.go:89] "metrics-server-84c5f94fbc-57b72" [52cb36ea-2cee-4a6c-a843-050157ac918f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 18:23:22.058766  299416 system_pods.go:89] "nvidia-device-plugin-daemonset-cjl9w" [cf6a3aee-788b-4415-a203-22cbc57cda34] Running
	I0910 18:23:22.058775  299416 system_pods.go:89] "registry-66c9cd494c-7t72n" [8fb774f6-bb57-4fa6-b59f-756cfe1a578c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 18:23:22.058781  299416 system_pods.go:89] "registry-proxy-z6m52" [b1b42e77-8421-4f88-b61b-d3f2d3e35d90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 18:23:22.058795  299416 system_pods.go:89] "snapshot-controller-56fcc65765-6zrwq" [1746b7ce-4dd9-4183-b370-c9116076a8d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 18:23:22.058809  299416 system_pods.go:89] "snapshot-controller-56fcc65765-z7nt4" [020d5a48-27e2-45cc-a6c4-9810352d5986] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 18:23:22.058814  299416 system_pods.go:89] "storage-provisioner" [d0fae042-74da-4edc-85df-c318851b7cb1] Running
	I0910 18:23:22.058821  299416 system_pods.go:126] duration metric: took 9.407568ms to wait for k8s-apps to be running ...
	I0910 18:23:22.058839  299416 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 18:23:22.058905  299416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:23:22.072947  299416 system_svc.go:56] duration metric: took 14.098856ms WaitForService to wait for kubelet
	I0910 18:23:22.072979  299416 kubeadm.go:582] duration metric: took 25.059197673s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:23:22.073001  299416 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:23:22.076265  299416 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0910 18:23:22.076311  299416 node_conditions.go:123] node cpu capacity is 2
	I0910 18:23:22.076324  299416 node_conditions.go:105] duration metric: took 3.317189ms to run NodePressure ...
	I0910 18:23:22.076336  299416 start.go:241] waiting for startup goroutines ...
	I0910 18:23:22.259146  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:22.260044  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:22.505009  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:22.758431  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:22.759533  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:23.004934  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:23.259378  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:23.261547  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:23.504655  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:23.759350  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:23.762276  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:24.005634  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:24.259257  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:24.260628  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:24.505083  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:24.757125  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:24.758082  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:25.027783  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:25.257026  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:25.257685  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:25.505279  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:25.755996  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:25.757476  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:26.015083  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:26.257362  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:26.257682  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:26.504336  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:26.760918  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:26.762092  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:27.005341  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:27.263524  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:27.264446  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:27.521055  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:27.759794  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:27.761215  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:28.007474  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:28.258697  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:28.259622  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:28.504525  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:28.767083  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:28.767464  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:29.006708  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:29.262597  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:29.264097  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:29.507739  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:29.761501  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:29.763823  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:30.017918  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:30.257587  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:30.259781  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:30.508249  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:30.758966  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:30.759385  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:31.007344  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:31.257593  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:31.259142  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:31.504716  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:31.759434  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:31.760724  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:32.006877  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:32.257162  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:32.259338  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:32.505710  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:32.757352  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:32.758471  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:33.005477  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:33.257423  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:33.258307  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:33.504989  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:33.759593  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:33.760852  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:34.005418  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:34.256685  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:34.257952  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:34.505221  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:34.756910  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:34.757944  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:35.014384  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:35.257152  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:35.257695  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:35.506194  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:35.759979  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:35.762172  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:36.007098  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:36.257916  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:36.258833  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:36.504188  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:36.758047  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:36.759508  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:37.005967  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:37.258656  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:37.260269  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:37.504699  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:37.756406  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:37.757371  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:38.005764  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:38.256540  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 18:23:38.258323  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:38.504419  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:38.758192  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:38.758703  299416 kapi.go:107] duration metric: took 31.506826794s to wait for kubernetes.io/minikube-addons=registry ...
	I0910 18:23:39.022879  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:39.257076  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:39.506229  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:39.757259  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:40.007060  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:40.257098  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:40.504983  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:40.757191  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:41.005979  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:41.256475  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:41.505180  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:41.758080  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:42.025876  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:42.257002  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:42.505799  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:42.756631  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:43.005227  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:43.257014  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:43.504756  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:43.756201  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:44.006294  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:44.260680  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:44.506335  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:44.757885  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:45.010139  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:45.293050  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:45.505276  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:45.756576  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:46.014404  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:46.256380  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:46.505025  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:46.757084  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:47.006740  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:47.256533  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:47.504303  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:47.756401  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:48.005630  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:48.256668  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:48.505124  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:48.756954  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:49.057157  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:49.257568  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:49.504759  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:49.760546  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:50.005373  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:50.257415  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:50.506458  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:50.756946  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:51.010203  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:51.257339  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:51.505127  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:51.757321  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:52.007194  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:52.257210  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:52.505427  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:52.758143  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:53.007840  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:53.256879  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:53.506509  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:53.756525  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:54.005627  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:54.256880  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:54.504976  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:54.757035  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:55.009587  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:55.257714  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:55.505139  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:55.756619  299416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 18:23:56.010895  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:56.256216  299416 kapi.go:107] duration metric: took 49.004345741s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0910 18:23:56.507180  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:57.007709  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:57.506349  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:58.013391  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:58.504898  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:59.006009  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:23:59.504223  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:24:00.049725  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:24:00.522545  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:24:01.005938  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:24:01.504878  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:24:02.011170  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:24:02.504427  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:24:03.019315  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:24:03.506135  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 18:24:04.041881  299416 kapi.go:107] duration metric: took 56.042145024s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0910 18:24:32.398879  299416 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 18:24:32.398904  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:32.898081  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:33.398182  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:33.897856  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:34.398771  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:34.898469  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:35.398308  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:35.897949  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:36.398200  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:36.897914  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:37.398858  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:37.899315  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:38.399082  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:38.897910  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:39.399268  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:39.897643  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:40.398972  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:40.898174  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:41.397793  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:41.898309  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:42.398445  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:42.897967  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:43.399145  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:43.898506  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:44.398656  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:44.898437  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:45.399329  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:45.898487  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:46.398611  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:46.897795  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:47.399031  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:47.898209  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:48.398679  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:48.898326  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:49.398596  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:49.898398  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:50.398084  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:50.898221  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:51.397839  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:51.899123  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:52.399037  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:52.898483  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:53.398195  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:53.898369  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:54.399096  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:54.898183  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:55.397948  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:55.898559  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:56.398455  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:56.898596  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:57.398146  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:57.898184  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:58.397949  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:58.899100  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:59.398658  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:24:59.898237  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:00.406399  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:00.898395  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:01.397931  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:01.898212  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:02.398845  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:02.898383  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:03.398009  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:03.899247  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:04.398227  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:04.897793  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:05.398614  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:05.898749  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:06.398918  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:06.897839  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:07.399009  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:07.899047  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:08.398668  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:08.897957  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:09.398918  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:09.899002  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:10.399042  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:10.898456  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:11.398452  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:11.898675  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:12.398681  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:12.899601  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:13.398409  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:13.899049  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:14.397722  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:14.898633  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:15.407390  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:15.898579  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:16.400898  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:16.898449  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:17.398139  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:17.897922  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:18.398545  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:18.898403  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:19.398963  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:19.897726  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:20.398151  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:20.898165  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:21.398310  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:21.897884  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:22.399326  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:22.898981  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:23.398582  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:23.898494  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:24.398364  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:24.898003  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:25.397734  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:25.898840  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:26.400269  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:26.898281  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:27.398416  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:27.898799  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:28.400481  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:28.898028  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:29.398615  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:29.898380  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:30.397794  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:30.898667  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:31.398847  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:31.898774  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:32.398185  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:32.897951  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:33.398537  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:33.898760  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:34.398886  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:34.898238  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:35.398003  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:35.898553  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:36.398069  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:36.899083  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:37.398198  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:37.898008  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:38.398293  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:38.899324  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:39.398367  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:39.898886  299416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 18:25:40.398588  299416 kapi.go:107] duration metric: took 2m31.004031506s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0910 18:25:40.400407  299416 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-827965 cluster.
	I0910 18:25:40.402274  299416 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0910 18:25:40.404143  299416 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0910 18:25:40.406163  299416 out.go:177] * Enabled addons: nvidia-device-plugin, volcano, cloud-spanner, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0910 18:25:40.408119  299416 addons.go:510] duration metric: took 2m43.393900584s for enable addons: enabled=[nvidia-device-plugin volcano cloud-spanner storage-provisioner ingress-dns metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0910 18:25:40.408183  299416 start.go:246] waiting for cluster config update ...
	I0910 18:25:40.408209  299416 start.go:255] writing updated cluster config ...
	I0910 18:25:40.408597  299416 ssh_runner.go:195] Run: rm -f paused
	I0910 18:25:40.751947  299416 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 18:25:40.753922  299416 out.go:177] * Done! kubectl is now configured to use "addons-827965" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	29e3310f7554c       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   8a0ef8807fc06       gadget-b2s4d
	7f8b429bc316c       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   ffe9f9fdfec8d       gcp-auth-89d5ffd79-4lffn
	3346edf2bdd1d       8b46b1cd48760       4 minutes ago       Running             admission                                0                   cdac0d3fd94d9       volcano-admission-77d7d48b68-vnlcw
	26f7c4e5b6561       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   3218353a12935       csi-hostpathplugin-4542j
	14941ec7bb872       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   3218353a12935       csi-hostpathplugin-4542j
	71c2c39d87177       922312104da8a       4 minutes ago       Running             liveness-probe                           0                   3218353a12935       csi-hostpathplugin-4542j
	f6a6d031f11b5       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   3218353a12935       csi-hostpathplugin-4542j
	81f1d288b9d17       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   3218353a12935       csi-hostpathplugin-4542j
	40ec33c24cf0f       289a818c8d9c5       5 minutes ago       Running             controller                               0                   e8b8334a6ea5b       ingress-nginx-controller-bc57996ff-kk8jh
	26d007e8263ee       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   f46319bb56d20       csi-hostpath-attacher-0
	936bb8f99f1fa       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   64af32c29e359       volcano-controllers-56675bb4d5-kmkm9
	a04eac86bdad6       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   59533b36c6140       volcano-scheduler-576bc46687-7x7qn
	96a45dd439741       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   d16257de31353       csi-hostpath-resizer-0
	f1ed1c7b0bdc4       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   9a7a26d2d85a5       snapshot-controller-56fcc65765-z7nt4
	5ad24f13a546f       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   5f4bfb2198aa5       snapshot-controller-56fcc65765-6zrwq
	b54031ac7936e       420193b27261a       5 minutes ago       Exited              patch                                    0                   1e8ec58bfe6d9       ingress-nginx-admission-patch-4qnbd
	7bbffa222e102       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   3218353a12935       csi-hostpathplugin-4542j
	2287a7ce9fb26       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   afa5c8e9a9755       registry-proxy-z6m52
	6ee3cd6338b69       77bdba588b953       5 minutes ago       Running             yakd                                     0                   2bb8fdfb4e916       yakd-dashboard-67d98fc6b-62hjp
	15972f694a938       420193b27261a       5 minutes ago       Exited              create                                   0                   63afcaaaed182       ingress-nginx-admission-create-qvwfp
	06b27e4869924       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   d0bd4fa6c158f       metrics-server-84c5f94fbc-57b72
	ee6cb725c120a       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   e9f7bcdec87f6       local-path-provisioner-86d989889c-b4bx6
	f69e8c0f18a6a       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   a9c3fb3f6492e       registry-66c9cd494c-7t72n
	dd1d2db0bd5bf       8be4bcf8ec607       5 minutes ago       Running             cloud-spanner-emulator                   0                   5248e82a8f0de       cloud-spanner-emulator-769b77f747-lb4q4
	b6b8e6d1d58cc       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   5c40e3a3d0397       nvidia-device-plugin-daemonset-cjl9w
	09dcb88574ee8       2437cf7621777       5 minutes ago       Running             coredns                                  0                   23ab9d1001cbc       coredns-6f6b679f8f-bjn88
	a6b29df2818d8       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   b2ddf2ceb48f2       kube-ingress-dns-minikube
	a832fed128df0       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   e91aa5dea26e7       storage-provisioner
	c3cf0e6bff461       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                              0                   0ef75dd572b82       kindnet-c4t7q
	cf0193a7cab5f       71d55d66fd4ee       6 minutes ago       Running             kube-proxy                               0                   31de72fb38311       kube-proxy-nrtn6
	baac9a2acec95       fbbbd428abb4d       6 minutes ago       Running             kube-scheduler                           0                   9d46a5b66c075       kube-scheduler-addons-827965
	a291cd7f20381       27e3830e14027       6 minutes ago       Running             etcd                                     0                   a540bc020523b       etcd-addons-827965
	072fd68238f64       cd0f0ae0ec9e0       6 minutes ago       Running             kube-apiserver                           0                   d436f8b2a99f9       kube-apiserver-addons-827965
	c17982cd96cf6       fcb0683e6bdbd       6 minutes ago       Running             kube-controller-manager                  0                   771a19251b846       kube-controller-manager-addons-827965
	
	
	==> containerd <==
	Sep 10 18:26:43 addons-827965 containerd[815]: time="2024-09-10T18:26:43.390131118Z" level=info msg="CreateContainer within sandbox \"8a0ef8807fc0672094b7bbb3827f906af19997b9c5f9c13b7e6045cb5e7bafe8\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 10 18:26:43 addons-827965 containerd[815]: time="2024-09-10T18:26:43.412539068Z" level=info msg="CreateContainer within sandbox \"8a0ef8807fc0672094b7bbb3827f906af19997b9c5f9c13b7e6045cb5e7bafe8\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d\""
	Sep 10 18:26:43 addons-827965 containerd[815]: time="2024-09-10T18:26:43.413285059Z" level=info msg="StartContainer for \"29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d\""
	Sep 10 18:26:43 addons-827965 containerd[815]: time="2024-09-10T18:26:43.469449730Z" level=info msg="StartContainer for \"29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d\" returns successfully"
	Sep 10 18:26:44 addons-827965 containerd[815]: time="2024-09-10T18:26:44.864717084Z" level=error msg="ExecSync for \"29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d\" failed" error="failed to exec in container: failed to start exec \"457b00553ee6f9cd4705bc920f818cc1de6f50c0dbe3352ee49f3702e2a18bf3\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 10 18:26:44 addons-827965 containerd[815]: time="2024-09-10T18:26:44.876284250Z" level=error msg="ExecSync for \"29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d\" failed" error="failed to exec in container: failed to start exec \"4e397b77f795f820cd30df094019dd3e5620ffc14d2f76ecc2edf9751439641c\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 10 18:26:44 addons-827965 containerd[815]: time="2024-09-10T18:26:44.886041191Z" level=error msg="ExecSync for \"29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d\" failed" error="failed to exec in container: failed to start exec \"fd9c0b6f34cbf60b938dc9fd1af1c139a74fba70ea584ee36b51b377933fec7d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 10 18:26:44 addons-827965 containerd[815]: time="2024-09-10T18:26:44.897166955Z" level=error msg="ExecSync for \"29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d\" failed" error="failed to exec in container: failed to start exec \"52a2a903d6e8e9065518dbe020ed61413157fd41632c0ef29956a0bb803f6ece\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 10 18:26:44 addons-827965 containerd[815]: time="2024-09-10T18:26:44.908331472Z" level=error msg="ExecSync for \"29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d\" failed" error="failed to exec in container: failed to start exec \"fd9019d393e2e0239f48ae523836356f2d02b8803ee26481474015e54514664c\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 10 18:26:44 addons-827965 containerd[815]: time="2024-09-10T18:26:44.929698991Z" level=error msg="ExecSync for \"29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d\" failed" error="failed to exec in container: failed to start exec \"1d85329c104859e630c4b6babccf38cc0fcc0e5e497996e328221a0636f4b34d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 10 18:26:45 addons-827965 containerd[815]: time="2024-09-10T18:26:45.186050904Z" level=info msg="shim disconnected" id=29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d namespace=k8s.io
	Sep 10 18:26:45 addons-827965 containerd[815]: time="2024-09-10T18:26:45.186127958Z" level=warning msg="cleaning up after shim disconnected" id=29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d namespace=k8s.io
	Sep 10 18:26:45 addons-827965 containerd[815]: time="2024-09-10T18:26:45.186144894Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 10 18:26:45 addons-827965 containerd[815]: time="2024-09-10T18:26:45.497144363Z" level=info msg="RemoveContainer for \"e04aa7b3735ea85c5de3ca1e9dfbdab7a9d634914d6f9bb87658db0f5f5186ea\""
	Sep 10 18:26:45 addons-827965 containerd[815]: time="2024-09-10T18:26:45.505726622Z" level=info msg="RemoveContainer for \"e04aa7b3735ea85c5de3ca1e9dfbdab7a9d634914d6f9bb87658db0f5f5186ea\" returns successfully"
	Sep 10 18:26:51 addons-827965 containerd[815]: time="2024-09-10T18:26:51.371830781Z" level=info msg="RemoveContainer for \"fa2c0c125a9a44b2cdea8a1cb4dd48e48175bb5e1da401c1fc7f5492c8b29b99\""
	Sep 10 18:26:51 addons-827965 containerd[815]: time="2024-09-10T18:26:51.378544280Z" level=info msg="RemoveContainer for \"fa2c0c125a9a44b2cdea8a1cb4dd48e48175bb5e1da401c1fc7f5492c8b29b99\" returns successfully"
	Sep 10 18:26:51 addons-827965 containerd[815]: time="2024-09-10T18:26:51.381923605Z" level=info msg="StopPodSandbox for \"024925c824c721cc5a64cfb3d02bd6ed52a3555459a5474eb6d9ade6bedd3acc\""
	Sep 10 18:26:51 addons-827965 containerd[815]: time="2024-09-10T18:26:51.389419050Z" level=info msg="TearDown network for sandbox \"024925c824c721cc5a64cfb3d02bd6ed52a3555459a5474eb6d9ade6bedd3acc\" successfully"
	Sep 10 18:26:51 addons-827965 containerd[815]: time="2024-09-10T18:26:51.389587509Z" level=info msg="StopPodSandbox for \"024925c824c721cc5a64cfb3d02bd6ed52a3555459a5474eb6d9ade6bedd3acc\" returns successfully"
	Sep 10 18:26:51 addons-827965 containerd[815]: time="2024-09-10T18:26:51.390218636Z" level=info msg="RemovePodSandbox for \"024925c824c721cc5a64cfb3d02bd6ed52a3555459a5474eb6d9ade6bedd3acc\""
	Sep 10 18:26:51 addons-827965 containerd[815]: time="2024-09-10T18:26:51.390271280Z" level=info msg="Forcibly stopping sandbox \"024925c824c721cc5a64cfb3d02bd6ed52a3555459a5474eb6d9ade6bedd3acc\""
	Sep 10 18:26:51 addons-827965 containerd[815]: time="2024-09-10T18:26:51.399257008Z" level=info msg="TearDown network for sandbox \"024925c824c721cc5a64cfb3d02bd6ed52a3555459a5474eb6d9ade6bedd3acc\" successfully"
	Sep 10 18:26:51 addons-827965 containerd[815]: time="2024-09-10T18:26:51.405686192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"024925c824c721cc5a64cfb3d02bd6ed52a3555459a5474eb6d9ade6bedd3acc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 10 18:26:51 addons-827965 containerd[815]: time="2024-09-10T18:26:51.405997912Z" level=info msg="RemovePodSandbox \"024925c824c721cc5a64cfb3d02bd6ed52a3555459a5474eb6d9ade6bedd3acc\" returns successfully"
	
	
	==> coredns [09dcb88574ee891e6db9b92ecc6e85ebc730d7cd4f68581e0651d55dcb465cec] <==
	[INFO] 10.244.0.7:52496 - 1405 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000215646s
	[INFO] 10.244.0.7:45707 - 7025 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002207769s
	[INFO] 10.244.0.7:45707 - 40812 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002263613s
	[INFO] 10.244.0.7:47085 - 10847 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000143524s
	[INFO] 10.244.0.7:47085 - 29273 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000071811s
	[INFO] 10.244.0.7:40531 - 24606 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000127434s
	[INFO] 10.244.0.7:40531 - 3345 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000073469s
	[INFO] 10.244.0.7:40695 - 3517 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062613s
	[INFO] 10.244.0.7:40695 - 47546 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000071417s
	[INFO] 10.244.0.7:35327 - 29786 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051996s
	[INFO] 10.244.0.7:35327 - 26968 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083914s
	[INFO] 10.244.0.7:49776 - 30823 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001414099s
	[INFO] 10.244.0.7:49776 - 12137 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001010047s
	[INFO] 10.244.0.7:42412 - 35900 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000053046s
	[INFO] 10.244.0.7:42412 - 53050 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000124889s
	[INFO] 10.244.0.24:46217 - 39871 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001683546s
	[INFO] 10.244.0.24:51975 - 47365 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002727997s
	[INFO] 10.244.0.24:36713 - 3656 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110252s
	[INFO] 10.244.0.24:48110 - 40034 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001658s
	[INFO] 10.244.0.24:43835 - 45322 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100323s
	[INFO] 10.244.0.24:40347 - 48540 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00011867s
	[INFO] 10.244.0.24:53882 - 18602 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002674697s
	[INFO] 10.244.0.24:38113 - 63146 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002676936s
	[INFO] 10.244.0.24:38836 - 52999 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001689487s
	[INFO] 10.244.0.24:39683 - 5495 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001785757s
	
	
	==> describe nodes <==
	Name:               addons-827965
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-827965
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=addons-827965
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T18_22_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-827965
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-827965"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:22:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-827965
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:28:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:25:54 +0000   Tue, 10 Sep 2024 18:22:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:25:54 +0000   Tue, 10 Sep 2024 18:22:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:25:54 +0000   Tue, 10 Sep 2024 18:22:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:25:54 +0000   Tue, 10 Sep 2024 18:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-827965
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cfb7272f00454787a434848e7fb00326
	  System UUID:                0b3c83f9-4a36-4cd8-b12b-eef3a1fe4ab5
	  Boot ID:                    7bf0ec17-776b-49a4-8976-7c252db50227
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.21
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-lb4q4     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  gadget                      gadget-b2s4d                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-4lffn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-kk8jh    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m54s
	  kube-system                 coredns-6f6b679f8f-bjn88                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-4542j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-827965                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m9s
	  kube-system                 kindnet-c4t7q                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-827965                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-addons-827965       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-proxy-nrtn6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-827965                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 metrics-server-84c5f94fbc-57b72             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m57s
	  kube-system                 nvidia-device-plugin-daemonset-cjl9w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-66c9cd494c-7t72n                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 registry-proxy-z6m52                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 snapshot-controller-56fcc65765-6zrwq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 snapshot-controller-56fcc65765-z7nt4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  local-path-storage          local-path-provisioner-86d989889c-b4bx6     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  volcano-system              volcano-admission-77d7d48b68-vnlcw          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-56675bb4d5-kmkm9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-scheduler-576bc46687-7x7qn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-62hjp              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m1s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  6m17s (x8 over 6m17s)  kubelet          Node addons-827965 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m17s (x7 over 6m17s)  kubelet          Node addons-827965 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m17s (x7 over 6m17s)  kubelet          Node addons-827965 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m8s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m8s                   kubelet          Node addons-827965 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m8s                   kubelet          Node addons-827965 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m8s                   kubelet          Node addons-827965 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m4s                   node-controller  Node addons-827965 event: Registered Node addons-827965 in Controller
	
	
	==> dmesg <==
	[Sep10 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016632] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.448339] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.720885] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.671385] kauditd_printk_skb: 36 callbacks suppressed
	[Sep10 17:23] FS-Cache: Duplicate cookie detected
	[  +0.000752] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001089] FS-Cache: O-cookie d=000000009363ae61{9P.session} n=00000000de87a977
	[  +0.001185] FS-Cache: O-key=[10] '34323935383836303636'
	[  +0.000825] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000998] FS-Cache: N-cookie d=000000009363ae61{9P.session} n=000000001d6c2bf4
	[  +0.001208] FS-Cache: N-key=[10] '34323935383836303636'
	[Sep10 17:28] hrtimer: interrupt took 15131239 ns
	[Sep10 17:50] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [a291cd7f203819733af5b95e80a04b9c7f5c75cc164cb269c6baf9074f098a00] <==
	{"level":"info","ts":"2024-09-10T18:22:43.376947Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-10T18:22:43.377483Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T18:22:43.377272Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-10T18:22:43.378702Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-10T18:22:43.378569Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-10T18:22:44.332841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-10T18:22:44.333059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-10T18:22:44.333213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-10T18:22:44.333354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-10T18:22:44.333449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-10T18:22:44.333546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-10T18:22:44.333639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-10T18:22:44.336384Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:22:44.337559Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-827965 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T18:22:44.337676Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:22:44.338174Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:22:44.338597Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:22:44.338746Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:22:44.338853Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:22:44.339695Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:22:44.345361Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-10T18:22:44.346476Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:22:44.350618Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T18:22:44.351810Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T18:22:44.351910Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [7f8b429bc316c7d5e445f97cab14ebed788ef887084340b0d6105515a105511c] <==
	2024/09/10 18:25:39 GCP Auth Webhook started!
	2024/09/10 18:25:57 Ready to marshal response ...
	2024/09/10 18:25:57 Ready to write response ...
	2024/09/10 18:25:58 Ready to marshal response ...
	2024/09/10 18:25:58 Ready to write response ...
	
	
	==> kernel <==
	 18:28:59 up  2:11,  0 users,  load average: 0.33, 1.42, 2.45
	Linux addons-827965 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [c3cf0e6bff461f0f7829ccead4b987706dbf95433921f16d805d53b80cd47f6a] <==
	I0910 18:26:50.694781       1 main.go:299] handling current node
	I0910 18:27:00.693807       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0910 18:27:00.693839       1 main.go:299] handling current node
	I0910 18:27:10.701158       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0910 18:27:10.701195       1 main.go:299] handling current node
	I0910 18:27:20.698984       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0910 18:27:20.699018       1 main.go:299] handling current node
	I0910 18:27:30.697655       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0910 18:27:30.697693       1 main.go:299] handling current node
	I0910 18:27:40.693739       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0910 18:27:40.694729       1 main.go:299] handling current node
	I0910 18:27:50.694907       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0910 18:27:50.694941       1 main.go:299] handling current node
	I0910 18:28:00.693934       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0910 18:28:00.693966       1 main.go:299] handling current node
	I0910 18:28:10.698325       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0910 18:28:10.698360       1 main.go:299] handling current node
	I0910 18:28:20.703185       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0910 18:28:20.703436       1 main.go:299] handling current node
	I0910 18:28:30.700872       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0910 18:28:30.700908       1 main.go:299] handling current node
	I0910 18:28:40.696638       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0910 18:28:40.696672       1 main.go:299] handling current node
	I0910 18:28:50.700893       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0910 18:28:50.700928       1 main.go:299] handling current node
	
	
	==> kube-apiserver [072fd68238f645dd18058ab1e5d8a5db591c71f941bea6c1e532f4502cbeed8e] <==
	W0910 18:24:06.335689       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.37.14:443: connect: connection refused
	W0910 18:24:07.340306       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.37.14:443: connect: connection refused
	W0910 18:24:08.381625       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.37.14:443: connect: connection refused
	W0910 18:24:09.444317       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.37.14:443: connect: connection refused
	W0910 18:24:10.500319       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.37.14:443: connect: connection refused
	W0910 18:24:11.599564       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.37.14:443: connect: connection refused
	W0910 18:24:12.170672       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.246.53:443: connect: connection refused
	E0910 18:24:12.170711       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.246.53:443: connect: connection refused" logger="UnhandledError"
	W0910 18:24:12.172516       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.37.14:443: connect: connection refused
	W0910 18:24:12.223742       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.246.53:443: connect: connection refused
	E0910 18:24:12.223778       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.246.53:443: connect: connection refused" logger="UnhandledError"
	W0910 18:24:12.225444       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.37.14:443: connect: connection refused
	W0910 18:24:12.655119       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.37.14:443: connect: connection refused
	W0910 18:24:13.750928       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.37.14:443: connect: connection refused
	W0910 18:24:14.757382       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.37.14:443: connect: connection refused
	W0910 18:24:15.801628       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.37.14:443: connect: connection refused
	W0910 18:24:16.847846       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.37.14:443: connect: connection refused
	W0910 18:24:32.158371       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.246.53:443: connect: connection refused
	E0910 18:24:32.158411       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.246.53:443: connect: connection refused" logger="UnhandledError"
	W0910 18:25:12.182145       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.246.53:443: connect: connection refused
	E0910 18:25:12.182189       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.246.53:443: connect: connection refused" logger="UnhandledError"
	W0910 18:25:12.232256       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.246.53:443: connect: connection refused
	E0910 18:25:12.232307       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.246.53:443: connect: connection refused" logger="UnhandledError"
	I0910 18:25:57.333594       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0910 18:25:57.370956       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [c17982cd96cf63abd423c19144c240dfee253efc845260dc9c2aec21b61ab249] <==
	I0910 18:25:12.208346       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0910 18:25:12.208532       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0910 18:25:12.225553       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0910 18:25:12.242334       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0910 18:25:12.256319       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0910 18:25:12.256498       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0910 18:25:12.268770       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0910 18:25:13.240688       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0910 18:25:13.252951       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0910 18:25:14.365541       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0910 18:25:14.386246       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0910 18:25:15.372060       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0910 18:25:15.379581       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0910 18:25:15.390726       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0910 18:25:15.398819       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0910 18:25:15.413280       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0910 18:25:15.419376       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0910 18:25:40.338223       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="11.087447ms"
	I0910 18:25:40.338434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="71.523µs"
	I0910 18:25:45.088679       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0910 18:25:45.093205       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0910 18:25:45.194292       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0910 18:25:45.194748       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0910 18:25:54.597143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-827965"
	I0910 18:25:57.016697       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [cf0193a7cab5f78844e2776c88a344098adea88b1467578fecee3949c17f711c] <==
	I0910 18:22:58.152867       1 server_linux.go:66] "Using iptables proxy"
	I0910 18:22:58.270297       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0910 18:22:58.270364       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:22:58.310442       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0910 18:22:58.310509       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:22:58.314379       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:22:58.317120       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:22:58.317143       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:22:58.321021       1 config.go:197] "Starting service config controller"
	I0910 18:22:58.321054       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:22:58.321082       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:22:58.321087       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:22:58.321704       1 config.go:326] "Starting node config controller"
	I0910 18:22:58.321722       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:22:58.421432       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 18:22:58.421497       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:22:58.421797       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [baac9a2acec95c84458ff384924631579252810601ea21eed26c7e3eafa87e6c] <==
	W0910 18:22:48.897892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0910 18:22:48.900994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:22:48.898274       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 18:22:48.901091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:22:49.724210       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 18:22:49.724319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 18:22:49.824759       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 18:22:49.824928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 18:22:49.912025       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 18:22:49.912288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:22:49.966260       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0910 18:22:49.966621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:22:49.977779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 18:22:49.978042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 18:22:49.989529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 18:22:49.989573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:22:50.061329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0910 18:22:50.064875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 18:22:50.113173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 18:22:50.113410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 18:22:50.123379       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 18:22:50.123426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:22:50.154623       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0910 18:22:50.154866       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0910 18:22:50.474110       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 18:26:51 addons-827965 kubelet[1499]: I0910 18:26:51.370399    1499 scope.go:117] "RemoveContainer" containerID="fa2c0c125a9a44b2cdea8a1cb4dd48e48175bb5e1da401c1fc7f5492c8b29b99"
	Sep 10 18:27:03 addons-827965 kubelet[1499]: I0910 18:27:03.257785    1499 scope.go:117] "RemoveContainer" containerID="29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d"
	Sep 10 18:27:03 addons-827965 kubelet[1499]: E0910 18:27:03.258173    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b2s4d_gadget(42586159-83ee-4fc6-8f18-fdeb15adfcde)\"" pod="gadget/gadget-b2s4d" podUID="42586159-83ee-4fc6-8f18-fdeb15adfcde"
	Sep 10 18:27:12 addons-827965 kubelet[1499]: I0910 18:27:12.257694    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-cjl9w" secret="" err="secret \"gcp-auth\" not found"
	Sep 10 18:27:13 addons-827965 kubelet[1499]: I0910 18:27:13.258192    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-7t72n" secret="" err="secret \"gcp-auth\" not found"
	Sep 10 18:27:18 addons-827965 kubelet[1499]: I0910 18:27:18.257787    1499 scope.go:117] "RemoveContainer" containerID="29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d"
	Sep 10 18:27:18 addons-827965 kubelet[1499]: E0910 18:27:18.257991    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b2s4d_gadget(42586159-83ee-4fc6-8f18-fdeb15adfcde)\"" pod="gadget/gadget-b2s4d" podUID="42586159-83ee-4fc6-8f18-fdeb15adfcde"
	Sep 10 18:27:29 addons-827965 kubelet[1499]: I0910 18:27:29.258108    1499 scope.go:117] "RemoveContainer" containerID="29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d"
	Sep 10 18:27:29 addons-827965 kubelet[1499]: E0910 18:27:29.258309    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b2s4d_gadget(42586159-83ee-4fc6-8f18-fdeb15adfcde)\"" pod="gadget/gadget-b2s4d" podUID="42586159-83ee-4fc6-8f18-fdeb15adfcde"
	Sep 10 18:27:41 addons-827965 kubelet[1499]: I0910 18:27:41.259121    1499 scope.go:117] "RemoveContainer" containerID="29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d"
	Sep 10 18:27:41 addons-827965 kubelet[1499]: E0910 18:27:41.259375    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b2s4d_gadget(42586159-83ee-4fc6-8f18-fdeb15adfcde)\"" pod="gadget/gadget-b2s4d" podUID="42586159-83ee-4fc6-8f18-fdeb15adfcde"
	Sep 10 18:27:41 addons-827965 kubelet[1499]: I0910 18:27:41.260454    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-z6m52" secret="" err="secret \"gcp-auth\" not found"
	Sep 10 18:27:52 addons-827965 kubelet[1499]: I0910 18:27:52.258211    1499 scope.go:117] "RemoveContainer" containerID="29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d"
	Sep 10 18:27:52 addons-827965 kubelet[1499]: E0910 18:27:52.258434    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b2s4d_gadget(42586159-83ee-4fc6-8f18-fdeb15adfcde)\"" pod="gadget/gadget-b2s4d" podUID="42586159-83ee-4fc6-8f18-fdeb15adfcde"
	Sep 10 18:28:04 addons-827965 kubelet[1499]: I0910 18:28:04.257945    1499 scope.go:117] "RemoveContainer" containerID="29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d"
	Sep 10 18:28:04 addons-827965 kubelet[1499]: E0910 18:28:04.258178    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b2s4d_gadget(42586159-83ee-4fc6-8f18-fdeb15adfcde)\"" pod="gadget/gadget-b2s4d" podUID="42586159-83ee-4fc6-8f18-fdeb15adfcde"
	Sep 10 18:28:17 addons-827965 kubelet[1499]: I0910 18:28:17.258335    1499 scope.go:117] "RemoveContainer" containerID="29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d"
	Sep 10 18:28:17 addons-827965 kubelet[1499]: E0910 18:28:17.258551    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b2s4d_gadget(42586159-83ee-4fc6-8f18-fdeb15adfcde)\"" pod="gadget/gadget-b2s4d" podUID="42586159-83ee-4fc6-8f18-fdeb15adfcde"
	Sep 10 18:28:21 addons-827965 kubelet[1499]: I0910 18:28:21.259245    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-cjl9w" secret="" err="secret \"gcp-auth\" not found"
	Sep 10 18:28:32 addons-827965 kubelet[1499]: I0910 18:28:32.258164    1499 scope.go:117] "RemoveContainer" containerID="29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d"
	Sep 10 18:28:32 addons-827965 kubelet[1499]: E0910 18:28:32.258389    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b2s4d_gadget(42586159-83ee-4fc6-8f18-fdeb15adfcde)\"" pod="gadget/gadget-b2s4d" podUID="42586159-83ee-4fc6-8f18-fdeb15adfcde"
	Sep 10 18:28:43 addons-827965 kubelet[1499]: I0910 18:28:43.257349    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-7t72n" secret="" err="secret \"gcp-auth\" not found"
	Sep 10 18:28:44 addons-827965 kubelet[1499]: I0910 18:28:44.257932    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-z6m52" secret="" err="secret \"gcp-auth\" not found"
	Sep 10 18:28:45 addons-827965 kubelet[1499]: I0910 18:28:45.257850    1499 scope.go:117] "RemoveContainer" containerID="29e3310f7554cc1026e2e028e1999467827eead4267143ac054b1d22ccc5fe2d"
	Sep 10 18:28:45 addons-827965 kubelet[1499]: E0910 18:28:45.258306    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b2s4d_gadget(42586159-83ee-4fc6-8f18-fdeb15adfcde)\"" pod="gadget/gadget-b2s4d" podUID="42586159-83ee-4fc6-8f18-fdeb15adfcde"
	
	
	==> storage-provisioner [a832fed128df0e7f5080d3db93c753f2977500f0c4125eabbeb5193f80d39b9f] <==
	I0910 18:23:03.201073       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 18:23:03.221478       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 18:23:03.226941       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 18:23:03.239747       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 18:23:03.239993       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-827965_a97348ad-fe76-486f-94ba-9b9890d4f883!
	I0910 18:23:03.249327       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"801a2741-a9b2-4c5a-8eb1-90ee32368a9f", APIVersion:"v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-827965_a97348ad-fe76-486f-94ba-9b9890d4f883 became leader
	I0910 18:23:03.342211       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-827965_a97348ad-fe76-486f-94ba-9b9890d4f883!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-827965 -n addons-827965
helpers_test.go:261: (dbg) Run:  kubectl --context addons-827965 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-qvwfp ingress-nginx-admission-patch-4qnbd test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-827965 describe pod ingress-nginx-admission-create-qvwfp ingress-nginx-admission-patch-4qnbd test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-827965 describe pod ingress-nginx-admission-create-qvwfp ingress-nginx-admission-patch-4qnbd test-job-nginx-0: exit status 1 (88.423033ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qvwfp" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4qnbd" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-827965 describe pod ingress-nginx-admission-create-qvwfp ingress-nginx-admission-patch-4qnbd test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (200.25s)


Test pass (299/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 11.57
4 TestDownloadOnly/v1.20.0/preload-exists 0.01
8 TestDownloadOnly/v1.20.0/LogsDuration 0.22
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 5.9
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.9
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 220.15
31 TestAddons/serial/GCPAuth/Namespaces 0.19
33 TestAddons/parallel/Registry 15.17
34 TestAddons/parallel/Ingress 20.26
35 TestAddons/parallel/InspektorGadget 11.89
36 TestAddons/parallel/MetricsServer 5.98
39 TestAddons/parallel/CSI 54.06
40 TestAddons/parallel/Headlamp 16.76
41 TestAddons/parallel/CloudSpanner 6.61
42 TestAddons/parallel/LocalPath 52.81
43 TestAddons/parallel/NvidiaDevicePlugin 6.54
44 TestAddons/parallel/Yakd 10.83
45 TestAddons/StoppedEnableDisable 12.28
46 TestCertOptions 39.72
47 TestCertExpiration 229.73
49 TestForceSystemdFlag 50.2
50 TestForceSystemdEnv 43.5
51 TestDockerEnvContainerd 49.44
56 TestErrorSpam/setup 32.51
57 TestErrorSpam/start 0.74
58 TestErrorSpam/status 1.08
59 TestErrorSpam/pause 1.83
60 TestErrorSpam/unpause 1.79
61 TestErrorSpam/stop 1.48
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 67.19
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.58
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.1
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.04
73 TestFunctional/serial/CacheCmd/cache/add_local 1.21
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.01
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.13
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 45.69
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.76
84 TestFunctional/serial/LogsFileCmd 1.77
85 TestFunctional/serial/InvalidService 4.58
87 TestFunctional/parallel/ConfigCmd 0.48
88 TestFunctional/parallel/DashboardCmd 9.54
89 TestFunctional/parallel/DryRun 0.58
90 TestFunctional/parallel/InternationalLanguage 0.29
91 TestFunctional/parallel/StatusCmd 1.05
95 TestFunctional/parallel/ServiceCmdConnect 7.57
96 TestFunctional/parallel/AddonsCmd 0.15
97 TestFunctional/parallel/PersistentVolumeClaim 23.83
99 TestFunctional/parallel/SSHCmd 0.54
100 TestFunctional/parallel/CpCmd 2.02
102 TestFunctional/parallel/FileSync 0.33
103 TestFunctional/parallel/CertSync 2.06
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.75
111 TestFunctional/parallel/License 0.33
112 TestFunctional/parallel/Version/short 0.12
113 TestFunctional/parallel/Version/components 1.34
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.81
119 TestFunctional/parallel/ImageCommands/Setup 0.72
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.46
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.38
125 TestFunctional/parallel/ServiceCmd/DeployApp 9.27
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.63
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.34
136 TestFunctional/parallel/ServiceCmd/List 0.33
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
139 TestFunctional/parallel/ServiceCmd/Format 0.41
140 TestFunctional/parallel/ServiceCmd/URL 0.38
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
148 TestFunctional/parallel/ProfileCmd/profile_list 0.46
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
150 TestFunctional/parallel/MountCmd/any-port 7.01
151 TestFunctional/parallel/MountCmd/specific-port 1.86
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.37
153 TestFunctional/delete_echo-server_images 0.05
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 115.3
160 TestMultiControlPlane/serial/DeployApp 43.24
161 TestMultiControlPlane/serial/PingHostFromPods 1.74
162 TestMultiControlPlane/serial/AddWorkerNode 24.58
163 TestMultiControlPlane/serial/NodeLabels 0.13
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.75
165 TestMultiControlPlane/serial/CopyFile 19.2
166 TestMultiControlPlane/serial/StopSecondaryNode 12.89
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.77
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.76
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 140.5
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.85
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.57
173 TestMultiControlPlane/serial/StopCluster 36.52
174 TestMultiControlPlane/serial/RestartCluster 81.34
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
176 TestMultiControlPlane/serial/AddSecondaryNode 46.45
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.77
181 TestJSONOutput/start/Command 59.61
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.87
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.66
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.78
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
206 TestKicCustomNetwork/create_custom_network 39.65
207 TestKicCustomNetwork/use_default_bridge_network 37.66
208 TestKicExistingNetwork 33.47
209 TestKicCustomSubnet 32.11
210 TestKicStaticIP 34.57
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 73.49
215 TestMountStart/serial/StartWithMountFirst 6.34
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 6.96
218 TestMountStart/serial/VerifyMountSecond 0.25
219 TestMountStart/serial/DeleteFirst 1.63
220 TestMountStart/serial/VerifyMountPostDelete 0.25
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.64
223 TestMountStart/serial/VerifyMountPostStop 0.25
226 TestMultiNode/serial/FreshStart2Nodes 77.83
227 TestMultiNode/serial/DeployApp2Nodes 17.03
228 TestMultiNode/serial/PingHostFrom2Pods 1.02
229 TestMultiNode/serial/AddNode 20.48
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.33
232 TestMultiNode/serial/CopyFile 10
233 TestMultiNode/serial/StopNode 2.21
234 TestMultiNode/serial/StartAfterStop 9.92
235 TestMultiNode/serial/RestartKeepsNodes 99.29
236 TestMultiNode/serial/DeleteNode 5.62
237 TestMultiNode/serial/StopMultiNode 24.05
238 TestMultiNode/serial/RestartMultiNode 53.1
239 TestMultiNode/serial/ValidateNameConflict 33.27
244 TestPreload 125.96
246 TestScheduledStopUnix 106.98
249 TestInsufficientStorage 11.33
250 TestRunningBinaryUpgrade 82.95
252 TestKubernetesUpgrade 349.98
253 TestMissingContainerUpgrade 181.09
255 TestPause/serial/Start 69.68
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
258 TestNoKubernetes/serial/StartWithK8s 42.61
259 TestNoKubernetes/serial/StartWithStopK8s 18.01
260 TestNoKubernetes/serial/Start 8.77
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
262 TestPause/serial/SecondStartNoReconfiguration 7.65
263 TestNoKubernetes/serial/ProfileList 1.25
264 TestNoKubernetes/serial/Stop 1.27
265 TestNoKubernetes/serial/StartNoArgs 7.37
266 TestPause/serial/Pause 1.01
267 TestPause/serial/VerifyStatus 0.39
268 TestPause/serial/Unpause 0.72
269 TestPause/serial/PauseAgain 1.06
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
271 TestPause/serial/DeletePaused 3.02
275 TestPause/serial/VerifyDeletedResources 0.18
280 TestNetworkPlugins/group/false 5.8
284 TestStoppedBinaryUpgrade/Setup 0.99
285 TestStoppedBinaryUpgrade/Upgrade 116.57
286 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
294 TestNetworkPlugins/group/auto/Start 75.51
295 TestNetworkPlugins/group/kindnet/Start 51.34
296 TestNetworkPlugins/group/auto/KubeletFlags 0.31
297 TestNetworkPlugins/group/auto/NetCatPod 10.38
298 TestNetworkPlugins/group/auto/DNS 0.32
299 TestNetworkPlugins/group/auto/Localhost 0.16
300 TestNetworkPlugins/group/auto/HairPin 0.17
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.39
304 TestNetworkPlugins/group/kindnet/DNS 0.28
305 TestNetworkPlugins/group/kindnet/Localhost 0.31
306 TestNetworkPlugins/group/kindnet/HairPin 0.17
307 TestNetworkPlugins/group/calico/Start 77.35
308 TestNetworkPlugins/group/custom-flannel/Start 60.98
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.32
311 TestNetworkPlugins/group/calico/NetCatPod 9.3
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.33
314 TestNetworkPlugins/group/calico/DNS 0.21
315 TestNetworkPlugins/group/calico/Localhost 0.25
316 TestNetworkPlugins/group/calico/HairPin 0.26
317 TestNetworkPlugins/group/custom-flannel/DNS 0.22
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
320 TestNetworkPlugins/group/enable-default-cni/Start 47.76
321 TestNetworkPlugins/group/flannel/Start 55.3
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.44
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.3
327 TestNetworkPlugins/group/flannel/ControllerPod 6.01
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
329 TestNetworkPlugins/group/flannel/NetCatPod 10.35
330 TestNetworkPlugins/group/flannel/DNS 0.27
331 TestNetworkPlugins/group/flannel/Localhost 0.24
332 TestNetworkPlugins/group/flannel/HairPin 0.2
333 TestNetworkPlugins/group/bridge/Start 53.65
335 TestStartStop/group/old-k8s-version/serial/FirstStart 152.14
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
337 TestNetworkPlugins/group/bridge/NetCatPod 11.36
338 TestNetworkPlugins/group/bridge/DNS 0.26
339 TestNetworkPlugins/group/bridge/Localhost 0.25
340 TestNetworkPlugins/group/bridge/HairPin 0.27
342 TestStartStop/group/no-preload/serial/FirstStart 71.07
343 TestStartStop/group/no-preload/serial/DeployApp 9.4
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.27
345 TestStartStop/group/no-preload/serial/Stop 12.19
346 TestStartStop/group/old-k8s-version/serial/DeployApp 8.56
347 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
348 TestStartStop/group/no-preload/serial/SecondStart 268.27
349 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.04
350 TestStartStop/group/old-k8s-version/serial/Stop 12.47
351 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.31
352 TestStartStop/group/old-k8s-version/serial/SecondStart 151.41
353 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
354 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
355 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
356 TestStartStop/group/old-k8s-version/serial/Pause 3.08
358 TestStartStop/group/embed-certs/serial/FirstStart 65.91
359 TestStartStop/group/embed-certs/serial/DeployApp 8.37
360 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.52
361 TestStartStop/group/embed-certs/serial/Stop 12.14
362 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
364 TestStartStop/group/embed-certs/serial/SecondStart 270.77
365 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.17
366 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
367 TestStartStop/group/no-preload/serial/Pause 4.53
369 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.58
370 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.44
371 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
372 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.09
373 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.54
374 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 279.15
375 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
376 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
377 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
378 TestStartStop/group/embed-certs/serial/Pause 3.15
380 TestStartStop/group/newest-cni/serial/FirstStart 36.19
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.38
383 TestStartStop/group/newest-cni/serial/Stop 1.29
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
385 TestStartStop/group/newest-cni/serial/SecondStart 17.85
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
389 TestStartStop/group/newest-cni/serial/Pause 3.12
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
391 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.93
TestDownloadOnly/v1.20.0/json-events (11.57s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-154248 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-154248 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.573567016s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.57s)

TestDownloadOnly/v1.20.0/preload-exists (0.01s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.01s)

TestDownloadOnly/v1.20.0/LogsDuration (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-154248
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-154248: exit status 85 (216.96215ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-154248 | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC |          |
	|         | -p download-only-154248        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:21:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:21:40.424251  298660 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:21:40.424453  298660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:21:40.424466  298660 out.go:358] Setting ErrFile to fd 2...
	I0910 18:21:40.424471  298660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:21:40.424742  298660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
	W0910 18:21:40.424929  298660 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19598-293262/.minikube/config/config.json: open /home/jenkins/minikube-integration/19598-293262/.minikube/config/config.json: no such file or directory
	I0910 18:21:40.425389  298660 out.go:352] Setting JSON to true
	I0910 18:21:40.426350  298660 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7451,"bootTime":1725985050,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0910 18:21:40.426426  298660 start.go:139] virtualization:  
	I0910 18:21:40.431474  298660 out.go:97] [download-only-154248] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0910 18:21:40.431644  298660 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19598-293262/.minikube/cache/preloaded-tarball: no such file or directory
	I0910 18:21:40.431687  298660 notify.go:220] Checking for updates...
	I0910 18:21:40.433590  298660 out.go:169] MINIKUBE_LOCATION=19598
	I0910 18:21:40.435536  298660 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:21:40.438255  298660 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19598-293262/kubeconfig
	I0910 18:21:40.439839  298660 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-293262/.minikube
	I0910 18:21:40.441856  298660 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0910 18:21:40.445595  298660 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 18:21:40.445917  298660 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:21:40.465205  298660 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0910 18:21:40.465298  298660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 18:21:40.529349  298660 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-10 18:21:40.520102528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 18:21:40.529469  298660 docker.go:318] overlay module found
	I0910 18:21:40.531383  298660 out.go:97] Using the docker driver based on user configuration
	I0910 18:21:40.531406  298660 start.go:297] selected driver: docker
	I0910 18:21:40.531412  298660 start.go:901] validating driver "docker" against <nil>
	I0910 18:21:40.531514  298660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 18:21:40.586722  298660 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-10 18:21:40.577149373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 18:21:40.586882  298660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 18:21:40.587181  298660 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0910 18:21:40.587329  298660 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 18:21:40.589492  298660 out.go:169] Using Docker driver with root privileges
	I0910 18:21:40.591165  298660 cni.go:84] Creating CNI manager for ""
	I0910 18:21:40.591182  298660 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0910 18:21:40.591198  298660 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0910 18:21:40.591285  298660 start.go:340] cluster config:
	{Name:download-only-154248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-154248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:21:40.593093  298660 out.go:97] Starting "download-only-154248" primary control-plane node in "download-only-154248" cluster
	I0910 18:21:40.593114  298660 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0910 18:21:40.595083  298660 out.go:97] Pulling base image v0.0.45-1725963390-19606 ...
	I0910 18:21:40.595106  298660 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0910 18:21:40.595252  298660 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local docker daemon
	I0910 18:21:40.611163  298660 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 to local cache
	I0910 18:21:40.611798  298660 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local cache directory
	I0910 18:21:40.611918  298660 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 to local cache
	I0910 18:21:40.657229  298660 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0910 18:21:40.657256  298660 cache.go:56] Caching tarball of preloaded images
	I0910 18:21:40.657971  298660 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0910 18:21:40.660165  298660 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0910 18:21:40.660198  298660 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0910 18:21:40.735619  298660 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19598-293262/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0910 18:21:49.899412  298660 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0910 18:21:49.899536  298660 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19598-293262/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-154248 host does not exist
	  To start a cluster, run: "minikube start -p download-only-154248"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.22s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-154248
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0/json-events (5.9s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-256966 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-256966 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.895164031s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (5.90s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-256966
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-256966: exit status 85 (67.933635ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-154248 | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC |                     |
	|         | -p download-only-154248        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC | 10 Sep 24 18:21 UTC |
	| delete  | -p download-only-154248        | download-only-154248 | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC | 10 Sep 24 18:21 UTC |
	| start   | -o=json --download-only        | download-only-256966 | jenkins | v1.34.0 | 10 Sep 24 18:21 UTC |                     |
	|         | -p download-only-256966        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:21:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:21:52.568595  298862 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:21:52.568813  298862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:21:52.568842  298862 out.go:358] Setting ErrFile to fd 2...
	I0910 18:21:52.568861  298862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:21:52.569153  298862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
	I0910 18:21:52.569628  298862 out.go:352] Setting JSON to true
	I0910 18:21:52.570587  298862 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7463,"bootTime":1725985050,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0910 18:21:52.570693  298862 start.go:139] virtualization:  
	I0910 18:21:52.573379  298862 out.go:97] [download-only-256966] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0910 18:21:52.573641  298862 notify.go:220] Checking for updates...
	I0910 18:21:52.575474  298862 out.go:169] MINIKUBE_LOCATION=19598
	I0910 18:21:52.577341  298862 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:21:52.579035  298862 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19598-293262/kubeconfig
	I0910 18:21:52.580810  298862 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-293262/.minikube
	I0910 18:21:52.582544  298862 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0910 18:21:52.585957  298862 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 18:21:52.586256  298862 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:21:52.607207  298862 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0910 18:21:52.607316  298862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 18:21:52.678733  298862 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-10 18:21:52.669325624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 18:21:52.678847  298862 docker.go:318] overlay module found
	I0910 18:21:52.680918  298862 out.go:97] Using the docker driver based on user configuration
	I0910 18:21:52.680961  298862 start.go:297] selected driver: docker
	I0910 18:21:52.680969  298862 start.go:901] validating driver "docker" against <nil>
	I0910 18:21:52.681109  298862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 18:21:52.744480  298862 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-10 18:21:52.735017143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 18:21:52.744661  298862 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 18:21:52.745013  298862 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0910 18:21:52.745174  298862 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 18:21:52.747387  298862 out.go:169] Using Docker driver with root privileges
	I0910 18:21:52.749061  298862 cni.go:84] Creating CNI manager for ""
	I0910 18:21:52.749086  298862 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0910 18:21:52.749096  298862 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0910 18:21:52.749171  298862 start.go:340] cluster config:
	{Name:download-only-256966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-256966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:21:52.750864  298862 out.go:97] Starting "download-only-256966" primary control-plane node in "download-only-256966" cluster
	I0910 18:21:52.750901  298862 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0910 18:21:52.752633  298862 out.go:97] Pulling base image v0.0.45-1725963390-19606 ...
	I0910 18:21:52.752687  298862 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0910 18:21:52.752838  298862 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local docker daemon
	I0910 18:21:52.768608  298862 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 to local cache
	I0910 18:21:52.768744  298862 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local cache directory
	I0910 18:21:52.768764  298862 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local cache directory, skipping pull
	I0910 18:21:52.768769  298862 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 exists in cache, skipping pull
	I0910 18:21:52.768796  298862 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 as a tarball
	I0910 18:21:52.811219  298862 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0910 18:21:52.811260  298862 cache.go:56] Caching tarball of preloaded images
	I0910 18:21:52.812123  298862 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0910 18:21:52.813944  298862 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0910 18:21:52.813973  298862 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0910 18:21:52.898414  298862 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/19598-293262/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0910 18:21:56.729337  298862 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0910 18:21:56.729457  298862 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19598-293262/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-256966 host does not exist
	  To start a cluster, run: "minikube start -p download-only-256966"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-256966
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.9s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-811787 --alsologtostderr --binary-mirror http://127.0.0.1:43871 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-811787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-811787
--- PASS: TestBinaryMirror (0.90s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-827965
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-827965: exit status 85 (78.361991ms)

-- stdout --
	* Profile "addons-827965" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-827965"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-827965
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-827965: exit status 85 (83.652842ms)

-- stdout --
	* Profile "addons-827965" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-827965"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (220.15s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-827965 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-827965 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m40.144753141s)
--- PASS: TestAddons/Setup (220.15s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-827965 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-827965 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/parallel/Registry (15.17s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 22.221333ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-7t72n" [8fb774f6-bb57-4fa6-b59f-756cfe1a578c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.023427263s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-z6m52" [b1b42e77-8421-4f88-b61b-d3f2d3e35d90] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012435108s
addons_test.go:342: (dbg) Run:  kubectl --context addons-827965 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-827965 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-827965 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.986141361s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-827965 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-827965 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.17s)

TestAddons/parallel/Ingress (20.26s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-827965 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-827965 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-827965 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ad1356ef-43e4-4797-bc49-f226b553db3f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ad1356ef-43e4-4797-bc49-f226b553db3f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004190966s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-827965 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-827965 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-827965 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-827965 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-827965 addons disable ingress-dns --alsologtostderr -v=1: (1.401061586s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-827965 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-827965 addons disable ingress --alsologtostderr -v=1: (8.060801207s)
--- PASS: TestAddons/parallel/Ingress (20.26s)

TestAddons/parallel/InspektorGadget (11.89s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-b2s4d" [42586159-83ee-4fc6-8f18-fdeb15adfcde] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003971579s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-827965
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-827965: (5.882191436s)
--- PASS: TestAddons/parallel/InspektorGadget (11.89s)

TestAddons/parallel/MetricsServer (5.98s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.803689ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-57b72" [52cb36ea-2cee-4a6c-a843-050157ac918f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004435908s
addons_test.go:417: (dbg) Run:  kubectl --context addons-827965 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-827965 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.98s)

TestAddons/parallel/CSI (54.06s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 24.906319ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-827965 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/09/10 18:29:34 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-827965 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1c027d00-6889-428c-bc20-0c01df1e47a3] Pending
helpers_test.go:344: "task-pv-pod" [1c027d00-6889-428c-bc20-0c01df1e47a3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1c027d00-6889-428c-bc20-0c01df1e47a3] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004339512s
addons_test.go:590: (dbg) Run:  kubectl --context addons-827965 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-827965 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-827965 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-827965 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-827965 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-827965 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-827965 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [14b443c8-5dd5-49ff-a0b9-504c2ecb5ad9] Pending
helpers_test.go:344: "task-pv-pod-restore" [14b443c8-5dd5-49ff-a0b9-504c2ecb5ad9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [14b443c8-5dd5-49ff-a0b9-504c2ecb5ad9] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004004619s
addons_test.go:632: (dbg) Run:  kubectl --context addons-827965 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-827965 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-827965 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-827965 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-827965 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.933910878s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-827965 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-827965 addons disable volumesnapshots --alsologtostderr -v=1: (1.206195409s)
--- PASS: TestAddons/parallel/CSI (54.06s)

TestAddons/parallel/Headlamp (16.76s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-827965 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-827965 --alsologtostderr -v=1: (1.00921266s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-pbkxh" [9a0cb3d3-cb5f-4ce3-969b-16c73f4b18dd] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-pbkxh" [9a0cb3d3-cb5f-4ce3-969b-16c73f4b18dd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-pbkxh" [9a0cb3d3-cb5f-4ce3-969b-16c73f4b18dd] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00322675s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-827965 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-827965 addons disable headlamp --alsologtostderr -v=1: (5.749095196s)
--- PASS: TestAddons/parallel/Headlamp (16.76s)

TestAddons/parallel/CloudSpanner (6.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-lb4q4" [1cd3016c-7175-479a-a39b-8e7ef48c5113] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005535857s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-827965
--- PASS: TestAddons/parallel/CloudSpanner (6.61s)

TestAddons/parallel/LocalPath (52.81s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-827965 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-827965 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827965 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b69b6135-9db0-478e-8b3b-72dfd144618d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b69b6135-9db0-478e-8b3b-72dfd144618d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b69b6135-9db0-478e-8b3b-72dfd144618d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004508741s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-827965 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-827965 ssh "cat /opt/local-path-provisioner/pvc-b05ee4d4-c81d-4a73-8087-1d216c82ac49_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-827965 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-827965 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-827965 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-827965 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.506860897s)
--- PASS: TestAddons/parallel/LocalPath (52.81s)

TestAddons/parallel/NvidiaDevicePlugin (6.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-cjl9w" [cf6a3aee-788b-4415-a203-22cbc57cda34] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003614334s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-827965
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

TestAddons/parallel/Yakd (10.83s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-62hjp" [c24f4bb7-093f-4c8a-85da-57092ba9f155] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004465209s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-827965 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-827965 addons disable yakd --alsologtostderr -v=1: (5.824385019s)
--- PASS: TestAddons/parallel/Yakd (10.83s)

TestAddons/StoppedEnableDisable (12.28s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-827965
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-827965: (12.014080328s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-827965
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-827965
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-827965
--- PASS: TestAddons/StoppedEnableDisable (12.28s)

TestCertOptions (39.72s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-429730 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-429730 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.977521483s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-429730 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-429730 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-429730 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-429730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-429730
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-429730: (2.038690979s)
--- PASS: TestCertOptions (39.72s)

TestCertExpiration (229.73s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-121667 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-121667 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.006156692s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-121667 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-121667 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.531282074s)
helpers_test.go:175: Cleaning up "cert-expiration-121667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-121667
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-121667: (3.19031257s)
--- PASS: TestCertExpiration (229.73s)

TestForceSystemdFlag (50.2s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-286958 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-286958 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (47.732877751s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-286958 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-286958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-286958
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-286958: (2.161971891s)
--- PASS: TestForceSystemdFlag (50.20s)

TestForceSystemdEnv (43.5s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-606143 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-606143 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.494255656s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-606143 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-606143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-606143
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-606143: (2.556503819s)
--- PASS: TestForceSystemdEnv (43.50s)

TestDockerEnvContainerd (49.44s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-993412 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-993412 --driver=docker  --container-runtime=containerd: (33.641344485s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-993412"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-1aAQhDaQGa0k/agent.317942" SSH_AGENT_PID="317943" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-1aAQhDaQGa0k/agent.317942" SSH_AGENT_PID="317943" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-1aAQhDaQGa0k/agent.317942" SSH_AGENT_PID="317943" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.320407586s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-1aAQhDaQGa0k/agent.317942" SSH_AGENT_PID="317943" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-1aAQhDaQGa0k/agent.317942" SSH_AGENT_PID="317943" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker image ls": (1.008408243s)
helpers_test.go:175: Cleaning up "dockerenv-993412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-993412
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-993412: (2.038302486s)
--- PASS: TestDockerEnvContainerd (49.44s)

                                                
                                    
TestErrorSpam/setup (32.51s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-278807 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-278807 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-278807 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-278807 --driver=docker  --container-runtime=containerd: (32.505904236s)
--- PASS: TestErrorSpam/setup (32.51s)

                                                
                                    
TestErrorSpam/start (0.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

                                                
                                    
TestErrorSpam/status (1.08s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 status
--- PASS: TestErrorSpam/status (1.08s)

                                                
                                    
TestErrorSpam/pause (1.83s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 pause
--- PASS: TestErrorSpam/pause (1.83s)

                                                
                                    
TestErrorSpam/unpause (1.79s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
TestErrorSpam/stop (1.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 stop: (1.29678157s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-278807 --log_dir /tmp/nospam-278807 stop
--- PASS: TestErrorSpam/stop (1.48s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19598-293262/.minikube/files/etc/test/nested/copy/298655/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (67.19s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-370349 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-370349 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m7.184303545s)
--- PASS: TestFunctional/serial/StartWithProxy (67.19s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.58s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-370349 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-370349 --alsologtostderr -v=8: (6.572214287s)
functional_test.go:663: soft start took 6.575098035s for "functional-370349" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.58s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-370349 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-370349 cache add registry.k8s.io/pause:3.1: (1.569472407s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-370349 cache add registry.k8s.io/pause:3.3: (1.3176237s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-370349 cache add registry.k8s.io/pause:latest: (1.157484865s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-370349 /tmp/TestFunctionalserialCacheCmdcacheadd_local3199010159/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 cache add minikube-local-cache-test:functional-370349
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 cache delete minikube-local-cache-test:functional-370349
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-370349
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-370349 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (306.680074ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-370349 cache reload: (1.109093642s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.01s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 kubectl -- --context functional-370349 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-370349 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (45.69s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-370349 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-370349 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.690796071s)
functional_test.go:761: restart took 45.690951976s for "functional-370349" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (45.69s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-370349 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.76s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-370349 logs: (1.762155278s)
--- PASS: TestFunctional/serial/LogsCmd (1.76s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.77s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 logs --file /tmp/TestFunctionalserialLogsFileCmd4065638149/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-370349 logs --file /tmp/TestFunctionalserialLogsFileCmd4065638149/001/logs.txt: (1.769070528s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.77s)

                                                
                                    
TestFunctional/serial/InvalidService (4.58s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-370349 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-370349
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-370349: exit status 115 (679.945863ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31512 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-370349 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.58s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-370349 config get cpus: exit status 14 (102.15039ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-370349 config get cpus: exit status 14 (74.323169ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.54s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-370349 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-370349 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 335106: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.54s)

                                                
                                    
TestFunctional/parallel/DryRun (0.58s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-370349 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-370349 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (244.38664ms)

-- stdout --
	* [functional-370349] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-293262/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-293262/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0910 18:35:53.412909  334575 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:35:53.413054  334575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:35:53.413076  334575 out.go:358] Setting ErrFile to fd 2...
	I0910 18:35:53.413083  334575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:35:53.413366  334575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
	I0910 18:35:53.413768  334575 out.go:352] Setting JSON to false
	I0910 18:35:53.414827  334575 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8304,"bootTime":1725985050,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0910 18:35:53.414907  334575 start.go:139] virtualization:  
	I0910 18:35:53.419429  334575 out.go:177] * [functional-370349] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0910 18:35:53.422550  334575 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:35:53.426026  334575 notify.go:220] Checking for updates...
	I0910 18:35:53.431317  334575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:35:53.433214  334575 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-293262/kubeconfig
	I0910 18:35:53.435351  334575 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-293262/.minikube
	I0910 18:35:53.438369  334575 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0910 18:35:53.440450  334575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:35:53.442618  334575 config.go:182] Loaded profile config "functional-370349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0910 18:35:53.443176  334575 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:35:53.475454  334575 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0910 18:35:53.475571  334575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 18:35:53.566109  334575 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-10 18:35:53.555979561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 18:35:53.566229  334575 docker.go:318] overlay module found
	I0910 18:35:53.568308  334575 out.go:177] * Using the docker driver based on existing profile
	I0910 18:35:53.569895  334575 start.go:297] selected driver: docker
	I0910 18:35:53.569914  334575 start.go:901] validating driver "docker" against &{Name:functional-370349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-370349 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:35:53.570027  334575 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:35:53.572344  334575 out.go:201] 
	W0910 18:35:53.574102  334575 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0910 18:35:53.575757  334575 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-370349 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.58s)

TestFunctional/parallel/InternationalLanguage (0.29s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-370349 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-370349 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (290.278529ms)

-- stdout --
	* [functional-370349] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-293262/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-293262/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0910 18:35:53.783033  334687 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:35:53.783239  334687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:35:53.783252  334687 out.go:358] Setting ErrFile to fd 2...
	I0910 18:35:53.783259  334687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:35:53.783665  334687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
	I0910 18:35:53.784073  334687 out.go:352] Setting JSON to false
	I0910 18:35:53.785119  334687 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8304,"bootTime":1725985050,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0910 18:35:53.785202  334687 start.go:139] virtualization:  
	I0910 18:35:53.789178  334687 out.go:177] * [functional-370349] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0910 18:35:53.791040  334687 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:35:53.791152  334687 notify.go:220] Checking for updates...
	I0910 18:35:53.794622  334687 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:35:53.796205  334687 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-293262/kubeconfig
	I0910 18:35:53.797783  334687 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-293262/.minikube
	I0910 18:35:53.799555  334687 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0910 18:35:53.801331  334687 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:35:53.804449  334687 config.go:182] Loaded profile config "functional-370349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0910 18:35:53.805858  334687 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:35:53.853682  334687 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0910 18:35:53.854032  334687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 18:35:53.973836  334687 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-10 18:35:53.96376452 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 18:35:53.973944  334687 docker.go:318] overlay module found
	I0910 18:35:53.975853  334687 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0910 18:35:53.977639  334687 start.go:297] selected driver: docker
	I0910 18:35:53.977656  334687 start.go:901] validating driver "docker" against &{Name:functional-370349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-370349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:35:53.977761  334687 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:35:53.979970  334687 out.go:201] 
	W0910 18:35:53.981838  334687 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0910 18:35:53.983694  334687 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.29s)

TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 status
E0910 18:35:41.457809  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 status -o json
E0910 18:35:42.099826  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)

TestFunctional/parallel/ServiceCmdConnect (7.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-370349 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-370349 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-c8vrw" [83bfde87-de03-4c1d-81c4-a805304ba30c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-c8vrw" [83bfde87-de03-4c1d-81c4-a805304ba30c] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003674333s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 service hello-node-connect --url
E0910 18:35:40.811903  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:35:40.818830  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:35:40.830219  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:35:40.851556  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:35:40.892910  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:35:40.974402  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:35:41.136199  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31473
functional_test.go:1675: http://192.168.49.2:31473: success! body:

Hostname: hello-node-connect-65d86f57f4-c8vrw

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31473
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.57s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (23.83s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [05a8d9f1-aae5-4589-a4ad-e6fdf6c92b8c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00378525s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-370349 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-370349 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-370349 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-370349 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d2fe77ef-7164-4ec8-84e9-d3c367c45337] Pending
helpers_test.go:344: "sp-pod" [d2fe77ef-7164-4ec8-84e9-d3c367c45337] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d2fe77ef-7164-4ec8-84e9-d3c367c45337] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004127258s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-370349 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-370349 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-370349 delete -f testdata/storage-provisioner/pod.yaml: (1.69115055s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-370349 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [23b8ece5-10db-4c43-a344-571bbaaac364] Pending
helpers_test.go:344: "sp-pod" [23b8ece5-10db-4c43-a344-571bbaaac364] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [23b8ece5-10db-4c43-a344-571bbaaac364] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004838327s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-370349 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.83s)

TestFunctional/parallel/SSHCmd (0.54s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

TestFunctional/parallel/CpCmd (2.02s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh -n functional-370349 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 cp functional-370349:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4175422652/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh -n functional-370349 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh -n functional-370349 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/298655/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "sudo cat /etc/test/nested/copy/298655/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (2.06s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/298655.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "sudo cat /etc/ssl/certs/298655.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/298655.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "sudo cat /usr/share/ca-certificates/298655.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2986552.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "sudo cat /etc/ssl/certs/2986552.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2986552.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "sudo cat /usr/share/ca-certificates/2986552.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.06s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-370349 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-370349 ssh "sudo systemctl is-active docker": exit status 1 (401.262389ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-370349 ssh "sudo systemctl is-active crio": exit status 1 (347.16963ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (1.34s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-370349 version -o=json --components: (1.344302099s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-370349 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-370349
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-370349
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-370349 image ls --format short --alsologtostderr:
I0910 18:35:56.178587  335142 out.go:345] Setting OutFile to fd 1 ...
I0910 18:35:56.178733  335142 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:35:56.178741  335142 out.go:358] Setting ErrFile to fd 2...
I0910 18:35:56.178747  335142 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:35:56.179009  335142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
I0910 18:35:56.179649  335142 config.go:182] Loaded profile config "functional-370349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0910 18:35:56.179777  335142 config.go:182] Loaded profile config "functional-370349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0910 18:35:56.180261  335142 cli_runner.go:164] Run: docker container inspect functional-370349 --format={{.State.Status}}
I0910 18:35:56.198462  335142 ssh_runner.go:195] Run: systemctl --version
I0910 18:35:56.198517  335142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-370349
I0910 18:35:56.216394  335142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/functional-370349/id_rsa Username:docker}
I0910 18:35:56.305270  335142 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-370349 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| docker.io/kicbase/echo-server               | functional-370349  | sha256:ce2d2c | 2.17MB |
| docker.io/library/minikube-local-cache-test | functional-370349  | sha256:c5f151 | 991B   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-370349 image ls --format table --alsologtostderr:
I0910 18:35:56.640376  335204 out.go:345] Setting OutFile to fd 1 ...
I0910 18:35:56.640570  335204 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:35:56.640600  335204 out.go:358] Setting ErrFile to fd 2...
I0910 18:35:56.640623  335204 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:35:56.640946  335204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
I0910 18:35:56.641619  335204 config.go:182] Loaded profile config "functional-370349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0910 18:35:56.641803  335204 config.go:182] Loaded profile config "functional-370349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0910 18:35:56.642321  335204 cli_runner.go:164] Run: docker container inspect functional-370349 --format={{.State.Status}}
I0910 18:35:56.658743  335204 ssh_runner.go:195] Run: systemctl --version
I0910 18:35:56.658798  335204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-370349
I0910 18:35:56.685646  335204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/functional-370349/id_rsa Username:docker}
I0910 18:35:56.777627  335204 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-370349 image ls --format json --alsologtostderr:
[{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:c5f1510fecfbac683bbcc1c25e73146dc19f9ebcfaa45f93ac54d0f199dd387d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-370349"],"size":"991"},{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-370349"],"size":"2173567"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-370349 image ls --format json --alsologtostderr:
I0910 18:35:56.405529  335173 out.go:345] Setting OutFile to fd 1 ...
I0910 18:35:56.405668  335173 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:35:56.405681  335173 out.go:358] Setting ErrFile to fd 2...
I0910 18:35:56.405710  335173 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:35:56.406067  335173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
I0910 18:35:56.407321  335173 config.go:182] Loaded profile config "functional-370349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0910 18:35:56.407527  335173 config.go:182] Loaded profile config "functional-370349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0910 18:35:56.408101  335173 cli_runner.go:164] Run: docker container inspect functional-370349 --format={{.State.Status}}
I0910 18:35:56.425063  335173 ssh_runner.go:195] Run: systemctl --version
I0910 18:35:56.425118  335173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-370349
I0910 18:35:56.443460  335173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/functional-370349/id_rsa Username:docker}
I0910 18:35:56.539033  335173 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-370349 image ls --format yaml --alsologtostderr:
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-370349
size: "2173567"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:c5f1510fecfbac683bbcc1c25e73146dc19f9ebcfaa45f93ac54d0f199dd387d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-370349
size: "991"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-370349 image ls --format yaml --alsologtostderr:
I0910 18:35:56.891124  335238 out.go:345] Setting OutFile to fd 1 ...
I0910 18:35:56.891302  335238 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:35:56.891316  335238 out.go:358] Setting ErrFile to fd 2...
I0910 18:35:56.891322  335238 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:35:56.891585  335238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
I0910 18:35:56.892281  335238 config.go:182] Loaded profile config "functional-370349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0910 18:35:56.892455  335238 config.go:182] Loaded profile config "functional-370349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0910 18:35:56.893045  335238 cli_runner.go:164] Run: docker container inspect functional-370349 --format={{.State.Status}}
I0910 18:35:56.913609  335238 ssh_runner.go:195] Run: systemctl --version
I0910 18:35:56.913689  335238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-370349
I0910 18:35:56.944588  335238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/functional-370349/id_rsa Username:docker}
I0910 18:35:57.046045  335238 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-370349 ssh pgrep buildkitd: exit status 1 (288.590993ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image build -t localhost/my-image:functional-370349 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-370349 image build -t localhost/my-image:functional-370349 testdata/build --alsologtostderr: (3.249666384s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-370349 image build -t localhost/my-image:functional-370349 testdata/build --alsologtostderr:
I0910 18:35:57.489469  335458 out.go:345] Setting OutFile to fd 1 ...
I0910 18:35:57.490033  335458 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:35:57.490068  335458 out.go:358] Setting ErrFile to fd 2...
I0910 18:35:57.490094  335458 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 18:35:57.490362  335458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
I0910 18:35:57.491066  335458 config.go:182] Loaded profile config "functional-370349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0910 18:35:57.492123  335458 config.go:182] Loaded profile config "functional-370349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0910 18:35:57.492676  335458 cli_runner.go:164] Run: docker container inspect functional-370349 --format={{.State.Status}}
I0910 18:35:57.511960  335458 ssh_runner.go:195] Run: systemctl --version
I0910 18:35:57.512028  335458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-370349
I0910 18:35:57.532700  335458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/functional-370349/id_rsa Username:docker}
I0910 18:35:57.621946  335458 build_images.go:161] Building image from path: /tmp/build.152515055.tar
I0910 18:35:57.622023  335458 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0910 18:35:57.634999  335458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.152515055.tar
I0910 18:35:57.639475  335458 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.152515055.tar: stat -c "%s %y" /var/lib/minikube/build/build.152515055.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.152515055.tar': No such file or directory
I0910 18:35:57.639506  335458 ssh_runner.go:362] scp /tmp/build.152515055.tar --> /var/lib/minikube/build/build.152515055.tar (3072 bytes)
I0910 18:35:57.669421  335458 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.152515055
I0910 18:35:57.683073  335458 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.152515055 -xf /var/lib/minikube/build/build.152515055.tar
I0910 18:35:57.693917  335458 containerd.go:394] Building image: /var/lib/minikube/build/build.152515055
I0910 18:35:57.694015  335458 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.152515055 --local dockerfile=/var/lib/minikube/build/build.152515055 --output type=image,name=localhost/my-image:functional-370349
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B 0.0s done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.4s done
#8 exporting manifest sha256:ed79f27433bd32450755771b39c93002ac1c420b86fa4217420a7f47b8909843
#8 exporting manifest sha256:ed79f27433bd32450755771b39c93002ac1c420b86fa4217420a7f47b8909843 0.0s done
#8 exporting config sha256:e3f4b95f667fdf208033752d737e6db5f70d7a32866b065f95b73cb23b3282a9 0.0s done
#8 naming to localhost/my-image:functional-370349 done
#8 DONE 0.4s
I0910 18:36:00.653253  335458 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.152515055 --local dockerfile=/var/lib/minikube/build/build.152515055 --output type=image,name=localhost/my-image:functional-370349: (2.959175635s)
I0910 18:36:00.653344  335458 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.152515055
I0910 18:36:00.664333  335458 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.152515055.tar
I0910 18:36:00.674483  335458 build_images.go:217] Built localhost/my-image:functional-370349 from /tmp/build.152515055.tar
I0910 18:36:00.674516  335458 build_images.go:133] succeeded building to: functional-370349
I0910 18:36:00.674521  335458 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image ls
E0910 18:36:01.308137  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
2024/09/10 18:36:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)

TestFunctional/parallel/ImageCommands/Setup (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-370349
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image load --daemon kicbase/echo-server:functional-370349 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-370349 image load --daemon kicbase/echo-server:functional-370349 --alsologtostderr: (1.19211527s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)
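The image load/verify cycle above can be replayed by hand. The sketch below only prints the commands (a dry run), since a real run needs Docker and a live minikube profile; the profile name functional-370349 is taken from this log.

```shell
#!/bin/sh
# Dry-run sketch of the image load/verify cycle exercised by
# ImageLoadDaemon. Commands are printed, not executed: a real run
# needs Docker and a live minikube profile.
PROFILE=functional-370349   # profile name from this log
steps=0
for cmd in \
    "docker pull kicbase/echo-server:latest" \
    "docker tag kicbase/echo-server:latest kicbase/echo-server:${PROFILE}" \
    "minikube -p ${PROFILE} image load --daemon kicbase/echo-server:${PROFILE}" \
    "minikube -p ${PROFILE} image ls"
do
    steps=$((steps + 1))
    echo "would run: $cmd"
done
```

The final `image ls` is what the test uses to confirm the loaded tag is visible inside the cluster's runtime.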

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image load --daemon kicbase/echo-server:functional-370349 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-370349 image load --daemon kicbase/echo-server:functional-370349 --alsologtostderr: (1.139921113s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-370349 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-370349 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-cm8xg" [f06d8bdc-c0b6-4ed1-a23c-de59c876d468] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-cm8xg" [f06d8bdc-c0b6-4ed1-a23c-de59c876d468] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004405441s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.27s)
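The `waiting 10m0s for pods matching ...` lines come from a poll-until-healthy loop in helpers_test.go. A minimal shell rendering of that pattern, with a stub standing in for the real `kubectl get pods` status query (here it reports Running on the third check):

```shell
#!/bin/sh
# Poll-until-healthy loop in the style of helpers_test.go. check_ready
# is a stub for the real pod status query; it succeeds on the third call.
attempts=0
check_ready() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]    # "Running" from the third check onward
}
tries=0
until check_ready; do
    tries=$((tries + 1))
    if [ "$tries" -gt 10 ]; then    # stand-in for the 10m0s deadline
        echo "timed out waiting for pod" >&2
        exit 1
    fi
    sleep 0    # a real loop would wait between polls
done
echo "pod healthy after $attempts checks"
```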

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-370349
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image load --daemon kicbase/echo-server:functional-370349 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-370349 image load --daemon kicbase/echo-server:functional-370349 --alsologtostderr: (1.123340747s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image save kicbase/echo-server:functional-370349 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image rm kicbase/echo-server:functional-370349 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)
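`image save` and `image load` move an image through the filesystem as a tarball. The round trip can be sketched locally with plain tar standing in for the image archive (no minikube or Docker required); layer.bin is a made-up placeholder for image content.

```shell
#!/bin/sh
# Save/load round trip with plain tar standing in for the tarball that
# `minikube image save` writes and `image load` reads back.
work=$(mktemp -d)
echo "layer-data" > "$work/layer.bin"                       # placeholder content
tar -C "$work" -cf "$work/echo-server-save.tar" layer.bin   # "image save" side
listing=$(tar -tf "$work/echo-server-save.tar")             # "image load" side
echo "archive contains: $listing"
rm -rf "$work"
```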

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-370349
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 image save --daemon kicbase/echo-server:functional-370349 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-370349
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-370349 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-370349 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-370349 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 331529: os: process already finished
helpers_test.go:508: unable to kill pid 331414: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-370349 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-370349 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-370349 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e1b573b2-92e2-4747-a22a-14367dfbc56c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e1b573b2-92e2-4747-a22a-14367dfbc56c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003414979s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 service list -o json
functional_test.go:1494: Took "339.380218ms" to run "out/minikube-linux-arm64 -p functional-370349 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)
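`service list -o json` is the machine-readable form a wrapper script would consume. A small sketch pulling a service name out of a sample listing without jq; the sample's field names (Namespace, Name, URLs) are an assumption about the shape of the output, not verbatim minikube schema.

```shell
#!/bin/sh
# Extract the service name from a sample JSON service listing using
# only tr/sed. The sample shape (Namespace/Name/URLs) is assumed.
json='[{"Namespace":"default","Name":"hello-node","URLs":["http://192.168.49.2:30529"]}]'
name=$(echo "$json" | tr ',' '\n' | sed -n 's/.*"Name":"\([^"]*\)".*/\1/p')
echo "service: $name"
```

For anything beyond a one-off check, a proper JSON parser (jq or similar) is the safer choice than line-oriented text tools.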

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30529
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30529
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
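The endpoint printed above (http://192.168.49.2:30529) is what a script driving these commands would probe next. Splitting it into host and NodePort takes only POSIX parameter expansion:

```shell
#!/bin/sh
# Split a NodePort endpoint URL into host and port using only POSIX
# parameter expansion; the URL is the one reported by this test run.
url="http://192.168.49.2:30529"
hostport=${url#http://}    # strip the scheme
host=${hostport%:*}        # everything before the last colon
port=${hostport##*:}       # everything after the last colon
echo "host=$host port=$port"
```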

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-370349 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.111.203 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-370349 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "360.408711ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "95.931253ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
E0910 18:35:43.382005  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1366: Took "322.07509ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "51.122617ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.01s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-370349 /tmp/TestFunctionalparallelMountCmdany-port2002432/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725993343456834664" to /tmp/TestFunctionalparallelMountCmdany-port2002432/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725993343456834664" to /tmp/TestFunctionalparallelMountCmdany-port2002432/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725993343456834664" to /tmp/TestFunctionalparallelMountCmdany-port2002432/001/test-1725993343456834664
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-370349 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (336.915716ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 10 18:35 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 10 18:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 10 18:35 test-1725993343456834664
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh cat /mount-9p/test-1725993343456834664
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-370349 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5eaa8809-5233-49f2-922d-f6610e03b979] Pending
E0910 18:35:45.944251  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [5eaa8809-5233-49f2-922d-f6610e03b979] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5eaa8809-5233-49f2-922d-f6610e03b979] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5eaa8809-5233-49f2-922d-f6610e03b979] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004469066s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-370349 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-370349 /tmp/TestFunctionalparallelMountCmdany-port2002432/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.01s)
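The any-port flow writes stamp files on the host, checks them through the guest mount, then unmounts. The file round trip can be sketched locally with a temp directory standing in for the 9p mount (no minikube involved):

```shell
#!/bin/sh
# File round trip from MountCmd/any-port, with a plain temp directory
# standing in for the real 9p guest mount.
mnt=$(mktemp -d)
stamp="test-$(date +%s)"    # same naming style as test-1725993343456834664
echo "created by test" > "$mnt/created-by-test"
echo "created by test" > "$mnt/$stamp"
count=$(ls "$mnt" | wc -l)  # the test inspects the mount with `ls -la /mount-9p`
echo "mount dir holds $count files"
rm -rf "$mnt"
```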

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.86s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-370349 /tmp/TestFunctionalparallelMountCmdspecific-port796692456/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-370349 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (326.564334ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
E0910 18:35:51.066298  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-370349 /tmp/TestFunctionalparallelMountCmdspecific-port796692456/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-370349 ssh "sudo umount -f /mount-9p": exit status 1 (291.378296ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-370349 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-370349 /tmp/TestFunctionalparallelMountCmdspecific-port796692456/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.86s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-370349 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2533669982/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-370349 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2533669982/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-370349 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2533669982/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-370349 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-370349 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-370349 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2533669982/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-370349 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2533669982/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-370349 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2533669982/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-370349
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-370349
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-370349
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (115.3s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-546439 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0910 18:36:21.790070  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:37:02.752340  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-546439 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m54.474693078s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (115.30s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (43.24s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- rollout status deployment/busybox
E0910 18:38:24.675009  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-546439 -- rollout status deployment/busybox: (40.177734459s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-8pjl5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-kg77s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-q8szd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-8pjl5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-kg77s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-q8szd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-8pjl5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-kg77s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-q8szd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (43.24s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.74s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-8pjl5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-8pjl5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-kg77s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-kg77s -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-q8szd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-546439 -- exec busybox-7dff88458-q8szd -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.74s)
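The pipeline the test runs inside each pod, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, slices the host IP out of line 5 of nslookup's output. The same pipeline is replayed below on canned output; the exact nslookup layout is an assumption (busybox formatting can differ), chosen so the fifth line carries the answer.

```shell
#!/bin/sh
# Replay of the test's awk/cut pipeline on canned nslookup output; the
# layout below is an assumed busybox-style answer, not captured output.
nslookup_out='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.49.1'
ip=$(printf '%s\n' "$nslookup_out" | awk 'NR==5' | cut -d" " -f3)
echo "host IP: $ip"
```

Hard-coding `NR==5` is brittle by design here; matching on the `host.minikube.internal` answer line would be the robust variant.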

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.58s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-546439 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-546439 -v=7 --alsologtostderr: (23.588834396s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.58s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-546439 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

TestMultiControlPlane/serial/CopyFile (19.20s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp testdata/cp-test.txt ha-546439:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1465435220/001/cp-test_ha-546439.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439:/home/docker/cp-test.txt ha-546439-m02:/home/docker/cp-test_ha-546439_ha-546439-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m02 "sudo cat /home/docker/cp-test_ha-546439_ha-546439-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439:/home/docker/cp-test.txt ha-546439-m03:/home/docker/cp-test_ha-546439_ha-546439-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m03 "sudo cat /home/docker/cp-test_ha-546439_ha-546439-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439:/home/docker/cp-test.txt ha-546439-m04:/home/docker/cp-test_ha-546439_ha-546439-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m04 "sudo cat /home/docker/cp-test_ha-546439_ha-546439-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp testdata/cp-test.txt ha-546439-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1465435220/001/cp-test_ha-546439-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439-m02:/home/docker/cp-test.txt ha-546439:/home/docker/cp-test_ha-546439-m02_ha-546439.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439 "sudo cat /home/docker/cp-test_ha-546439-m02_ha-546439.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439-m02:/home/docker/cp-test.txt ha-546439-m03:/home/docker/cp-test_ha-546439-m02_ha-546439-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m03 "sudo cat /home/docker/cp-test_ha-546439-m02_ha-546439-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439-m02:/home/docker/cp-test.txt ha-546439-m04:/home/docker/cp-test_ha-546439-m02_ha-546439-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m04 "sudo cat /home/docker/cp-test_ha-546439-m02_ha-546439-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp testdata/cp-test.txt ha-546439-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1465435220/001/cp-test_ha-546439-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439-m03:/home/docker/cp-test.txt ha-546439:/home/docker/cp-test_ha-546439-m03_ha-546439.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439 "sudo cat /home/docker/cp-test_ha-546439-m03_ha-546439.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439-m03:/home/docker/cp-test.txt ha-546439-m02:/home/docker/cp-test_ha-546439-m03_ha-546439-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m02 "sudo cat /home/docker/cp-test_ha-546439-m03_ha-546439-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439-m03:/home/docker/cp-test.txt ha-546439-m04:/home/docker/cp-test_ha-546439-m03_ha-546439-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m04 "sudo cat /home/docker/cp-test_ha-546439-m03_ha-546439-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp testdata/cp-test.txt ha-546439-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1465435220/001/cp-test_ha-546439-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439-m04:/home/docker/cp-test.txt ha-546439:/home/docker/cp-test_ha-546439-m04_ha-546439.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439 "sudo cat /home/docker/cp-test_ha-546439-m04_ha-546439.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439-m04:/home/docker/cp-test.txt ha-546439-m02:/home/docker/cp-test_ha-546439-m04_ha-546439-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m02 "sudo cat /home/docker/cp-test_ha-546439-m04_ha-546439-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 cp ha-546439-m04:/home/docker/cp-test.txt ha-546439-m03:/home/docker/cp-test_ha-546439-m04_ha-546439-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 ssh -n ha-546439-m03 "sudo cat /home/docker/cp-test_ha-546439-m04_ha-546439-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.20s)

TestMultiControlPlane/serial/StopSecondaryNode (12.89s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-546439 node stop m02 -v=7 --alsologtostderr: (12.149910305s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-546439 status -v=7 --alsologtostderr: exit status 7 (736.123871ms)

-- stdout --
	ha-546439
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-546439-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-546439-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-546439-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0910 18:39:43.499706  351777 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:39:43.499932  351777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:39:43.499961  351777 out.go:358] Setting ErrFile to fd 2...
	I0910 18:39:43.499981  351777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:39:43.500259  351777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
	I0910 18:39:43.500497  351777 out.go:352] Setting JSON to false
	I0910 18:39:43.500564  351777 mustload.go:65] Loading cluster: ha-546439
	I0910 18:39:43.500596  351777 notify.go:220] Checking for updates...
	I0910 18:39:43.501263  351777 config.go:182] Loaded profile config "ha-546439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0910 18:39:43.501313  351777 status.go:255] checking status of ha-546439 ...
	I0910 18:39:43.501890  351777 cli_runner.go:164] Run: docker container inspect ha-546439 --format={{.State.Status}}
	I0910 18:39:43.523960  351777 status.go:330] ha-546439 host status = "Running" (err=<nil>)
	I0910 18:39:43.524065  351777 host.go:66] Checking if "ha-546439" exists ...
	I0910 18:39:43.524386  351777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-546439
	I0910 18:39:43.556839  351777 host.go:66] Checking if "ha-546439" exists ...
	I0910 18:39:43.557188  351777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:39:43.557252  351777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-546439
	I0910 18:39:43.578486  351777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/ha-546439/id_rsa Username:docker}
	I0910 18:39:43.670317  351777 ssh_runner.go:195] Run: systemctl --version
	I0910 18:39:43.676928  351777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:39:43.689528  351777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 18:39:43.753337  351777 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-10 18:39:43.742315054 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 18:39:43.754042  351777 kubeconfig.go:125] found "ha-546439" server: "https://192.168.49.254:8443"
	I0910 18:39:43.754081  351777 api_server.go:166] Checking apiserver status ...
	I0910 18:39:43.754131  351777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:39:43.766968  351777 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	I0910 18:39:43.777532  351777 api_server.go:182] apiserver freezer: "8:freezer:/docker/05f5ecbda46b74e92bcd739c6b3d743fe228af72cb4de7ea53cc1b563d7aaccd/kubepods/burstable/pod39c6fa7ad74c3a649a32eb2ebeb52cba/ccc8d162cb3feca2cc7e9937c3e95ed9d2d06f8f297b999b1e69212c47ae33d8"
	I0910 18:39:43.777604  351777 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/05f5ecbda46b74e92bcd739c6b3d743fe228af72cb4de7ea53cc1b563d7aaccd/kubepods/burstable/pod39c6fa7ad74c3a649a32eb2ebeb52cba/ccc8d162cb3feca2cc7e9937c3e95ed9d2d06f8f297b999b1e69212c47ae33d8/freezer.state
	I0910 18:39:43.786887  351777 api_server.go:204] freezer state: "THAWED"
	I0910 18:39:43.786918  351777 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0910 18:39:43.795173  351777 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0910 18:39:43.795233  351777 status.go:422] ha-546439 apiserver status = Running (err=<nil>)
	I0910 18:39:43.795246  351777 status.go:257] ha-546439 status: &{Name:ha-546439 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:39:43.795270  351777 status.go:255] checking status of ha-546439-m02 ...
	I0910 18:39:43.795598  351777 cli_runner.go:164] Run: docker container inspect ha-546439-m02 --format={{.State.Status}}
	I0910 18:39:43.813060  351777 status.go:330] ha-546439-m02 host status = "Stopped" (err=<nil>)
	I0910 18:39:43.813088  351777 status.go:343] host is not running, skipping remaining checks
	I0910 18:39:43.813098  351777 status.go:257] ha-546439-m02 status: &{Name:ha-546439-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:39:43.813120  351777 status.go:255] checking status of ha-546439-m03 ...
	I0910 18:39:43.813481  351777 cli_runner.go:164] Run: docker container inspect ha-546439-m03 --format={{.State.Status}}
	I0910 18:39:43.830459  351777 status.go:330] ha-546439-m03 host status = "Running" (err=<nil>)
	I0910 18:39:43.830486  351777 host.go:66] Checking if "ha-546439-m03" exists ...
	I0910 18:39:43.830819  351777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-546439-m03
	I0910 18:39:43.846522  351777 host.go:66] Checking if "ha-546439-m03" exists ...
	I0910 18:39:43.846844  351777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:39:43.846892  351777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-546439-m03
	I0910 18:39:43.863941  351777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/ha-546439-m03/id_rsa Username:docker}
	I0910 18:39:43.954884  351777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:39:43.967436  351777 kubeconfig.go:125] found "ha-546439" server: "https://192.168.49.254:8443"
	I0910 18:39:43.967467  351777 api_server.go:166] Checking apiserver status ...
	I0910 18:39:43.967519  351777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:39:43.979041  351777 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1317/cgroup
	I0910 18:39:43.988633  351777 api_server.go:182] apiserver freezer: "8:freezer:/docker/1f0ec67c2589a5f321bf1bc9cb942905b0c12f79733a583750e82c94c8519008/kubepods/burstable/podf263b5bd090bcf28c27ee8a3937d7141/00679c600961e29e75b3e460255803fb951895e00b432c1afcc178afade0d30a"
	I0910 18:39:43.988710  351777 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1f0ec67c2589a5f321bf1bc9cb942905b0c12f79733a583750e82c94c8519008/kubepods/burstable/podf263b5bd090bcf28c27ee8a3937d7141/00679c600961e29e75b3e460255803fb951895e00b432c1afcc178afade0d30a/freezer.state
	I0910 18:39:43.998726  351777 api_server.go:204] freezer state: "THAWED"
	I0910 18:39:43.998753  351777 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0910 18:39:44.008768  351777 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0910 18:39:44.009584  351777 status.go:422] ha-546439-m03 apiserver status = Running (err=<nil>)
	I0910 18:39:44.009816  351777 status.go:257] ha-546439-m03 status: &{Name:ha-546439-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:39:44.009902  351777 status.go:255] checking status of ha-546439-m04 ...
	I0910 18:39:44.012053  351777 cli_runner.go:164] Run: docker container inspect ha-546439-m04 --format={{.State.Status}}
	I0910 18:39:44.036451  351777 status.go:330] ha-546439-m04 host status = "Running" (err=<nil>)
	I0910 18:39:44.036482  351777 host.go:66] Checking if "ha-546439-m04" exists ...
	I0910 18:39:44.036841  351777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-546439-m04
	I0910 18:39:44.055876  351777 host.go:66] Checking if "ha-546439-m04" exists ...
	I0910 18:39:44.056264  351777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:39:44.056311  351777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-546439-m04
	I0910 18:39:44.075164  351777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/ha-546439-m04/id_rsa Username:docker}
	I0910 18:39:44.162291  351777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:39:44.175163  351777 status.go:257] ha-546439-m04 status: &{Name:ha-546439-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.89s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.77s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-546439 node start m02 -v=7 --alsologtostderr: (17.621286144s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-546439 status -v=7 --alsologtostderr: (1.04237123s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.77s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.76s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (140.50s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-546439 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-546439 -v=7 --alsologtostderr
E0910 18:40:18.675240  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:18.681673  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:18.693117  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:18.714565  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:18.756984  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:18.838292  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:19.000290  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:19.322164  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:19.964230  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:21.245865  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:23.808931  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:28.931183  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:39.172919  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:40.812201  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-546439 -v=7 --alsologtostderr: (37.150134886s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-546439 --wait=true -v=7 --alsologtostderr
E0910 18:40:59.654831  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:08.516538  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:40.617060  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-546439 --wait=true -v=7 --alsologtostderr: (1m43.193044031s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-546439
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (140.50s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.85s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-546439 node delete m03 -v=7 --alsologtostderr: (9.906481173s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.85s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

TestMultiControlPlane/serial/StopCluster (36.52s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 stop -v=7 --alsologtostderr
E0910 18:43:02.539075  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-546439 stop -v=7 --alsologtostderr: (36.413087886s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-546439 status -v=7 --alsologtostderr: exit status 7 (111.593606ms)

-- stdout --
	ha-546439
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-546439-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-546439-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0910 18:43:12.656871  366044 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:43:12.657015  366044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:43:12.657026  366044 out.go:358] Setting ErrFile to fd 2...
	I0910 18:43:12.657032  366044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:43:12.657287  366044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
	I0910 18:43:12.657476  366044 out.go:352] Setting JSON to false
	I0910 18:43:12.657512  366044 mustload.go:65] Loading cluster: ha-546439
	I0910 18:43:12.657628  366044 notify.go:220] Checking for updates...
	I0910 18:43:12.657929  366044 config.go:182] Loaded profile config "ha-546439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0910 18:43:12.657948  366044 status.go:255] checking status of ha-546439 ...
	I0910 18:43:12.658466  366044 cli_runner.go:164] Run: docker container inspect ha-546439 --format={{.State.Status}}
	I0910 18:43:12.677153  366044 status.go:330] ha-546439 host status = "Stopped" (err=<nil>)
	I0910 18:43:12.677175  366044 status.go:343] host is not running, skipping remaining checks
	I0910 18:43:12.677182  366044 status.go:257] ha-546439 status: &{Name:ha-546439 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:43:12.677222  366044 status.go:255] checking status of ha-546439-m02 ...
	I0910 18:43:12.677536  366044 cli_runner.go:164] Run: docker container inspect ha-546439-m02 --format={{.State.Status}}
	I0910 18:43:12.695653  366044 status.go:330] ha-546439-m02 host status = "Stopped" (err=<nil>)
	I0910 18:43:12.695678  366044 status.go:343] host is not running, skipping remaining checks
	I0910 18:43:12.695685  366044 status.go:257] ha-546439-m02 status: &{Name:ha-546439-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:43:12.695720  366044 status.go:255] checking status of ha-546439-m04 ...
	I0910 18:43:12.696025  366044 cli_runner.go:164] Run: docker container inspect ha-546439-m04 --format={{.State.Status}}
	I0910 18:43:12.717659  366044 status.go:330] ha-546439-m04 host status = "Stopped" (err=<nil>)
	I0910 18:43:12.717683  366044 status.go:343] host is not running, skipping remaining checks
	I0910 18:43:12.717691  366044 status.go:257] ha-546439-m04 status: &{Name:ha-546439-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.52s)
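The `-- stdout --` block above is the plain-text form of `minikube status`: a bare node name followed by `field: state` lines, with exit status 7 signalling stopped hosts. A minimal sketch (not part of the test suite) of turning that text into a per-node map, using a sample copied from the output above:

```python
def parse_status(text: str) -> dict:
    """Group 'key: value' lines under the preceding bare node-name line."""
    nodes, current = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            current = None          # blank line ends the current node block
            continue
        if ":" in line:
            if current is not None:
                key, _, value = line.partition(":")
                nodes[current][key.strip()] = value.strip()
        else:
            current = line          # a bare line names the next node
            nodes[current] = {}
    return nodes

# Sample copied from the -- stdout -- block in this log (middle node omitted).
sample = """\
ha-546439
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-546439-m04
type: Worker
host: Stopped
kubelet: Stopped
"""

status = parse_status(sample)
print(status["ha-546439"]["host"])      # Stopped
print(status["ha-546439-m04"]["type"])  # Worker
```

Note the worker node carries no `apiserver`/`kubeconfig` fields, matching the output above.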

TestMultiControlPlane/serial/RestartCluster (81.34s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-546439 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-546439 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m20.369782185s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.34s)
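The check at ha_test.go:592 uses a go-template that walks every node's `status.conditions` and prints the `status` of the condition whose `type` is `Ready`. A sketch of the same extraction done in Python against `kubectl get nodes -o json`-shaped data; the sample dict below is illustrative, not captured from this run:

```python
def ready_statuses(nodes: dict) -> list:
    """Collect the Ready-condition status ('True'/'False') for each node item."""
    out = []
    for item in nodes.get("items", []):
        for cond in item.get("status", {}).get("conditions", []):
            if cond.get("type") == "Ready":
                out.append(cond.get("status"))
    return out

# Hypothetical two-node response in the shape `kubectl get nodes -o json` returns.
sample = {
    "items": [
        {"status": {"conditions": [
            {"type": "MemoryPressure", "status": "False"},
            {"type": "Ready", "status": "True"},
        ]}},
        {"status": {"conditions": [
            {"type": "Ready", "status": "True"},
        ]}},
    ]
}

print(ready_statuses(sample))  # ['True', 'True']
```

A restarted HA cluster passes this check only when every node reports `True`.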

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

TestMultiControlPlane/serial/AddSecondaryNode (46.45s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-546439 --control-plane -v=7 --alsologtostderr
E0910 18:45:18.674855  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-546439 --control-plane -v=7 --alsologtostderr: (45.386499256s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-546439 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-546439 status -v=7 --alsologtostderr: (1.062267634s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

TestJSONOutput/start/Command (59.61s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-550625 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0910 18:45:40.811607  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:46.380966  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-550625 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (59.606408329s)
--- PASS: TestJSONOutput/start/Command (59.61s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.87s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-550625 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.87s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-550625 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-550625 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-550625 --output=json --user=testUser: (5.775100564s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-081146 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-081146 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.034715ms)
-- stdout --
	{"specversion":"1.0","id":"fd62cbbc-5b61-4c55-aab9-53167bccffb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-081146] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"12495fbd-87fe-46e1-85e3-ab228948b9a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19598"}}
	{"specversion":"1.0","id":"1a597600-cac9-4413-a4f2-689dd59773a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bf192b6a-3766-4959-b418-59041284debe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19598-293262/kubeconfig"}}
	{"specversion":"1.0","id":"087d1bc4-7d60-451b-9445-c99dc1f45071","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-293262/.minikube"}}
	{"specversion":"1.0","id":"f4f11e97-f102-47a3-8e8b-d3cf3c3d99e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4dbe42a0-60ac-49a6-96ed-aa030961272c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c10b2410-96f2-4808-8210-9151fcf64d21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-081146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-081146
--- PASS: TestErrorJSONOutput (0.22s)
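Each stdout line from `minikube start --output=json` is a CloudEvents envelope, and the test relies on the error event carrying the exit code. A sketch of picking that event out of the stream; the `raw` line is copied verbatim from the `-- stdout --` block above (the info/step lines are omitted here):

```python
import json

# Error event emitted by `minikube start --driver=fail`, copied from the log above.
raw = '{"specversion":"1.0","id":"c10b2410-96f2-4808-8210-9151fcf64d21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver \'fail\' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}'

def error_events(lines):
    """Parse one JSON event per line and keep only *.error events."""
    events = (json.loads(line) for line in lines if line.strip())
    return [e for e in events if e["type"].endswith(".error")]

errs = error_events([raw])
print(errs[0]["data"]["exitcode"], errs[0]["data"]["name"])  # 56 DRV_UNSUPPORTED_OS
```

The `exitcode` field (`"56"`) matches the process exit status reported by the `Non-zero exit` line above.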

TestKicCustomNetwork/create_custom_network (39.65s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-080190 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-080190 --network=: (37.48882245s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-080190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-080190
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-080190: (2.134766495s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.65s)

TestKicCustomNetwork/use_default_bridge_network (37.66s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-642852 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-642852 --network=bridge: (35.588046167s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-642852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-642852
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-642852: (2.045988533s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.66s)

TestKicExistingNetwork (33.47s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-302595 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-302595 --network=existing-network: (31.333013473s)
helpers_test.go:175: Cleaning up "existing-network-302595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-302595
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-302595: (1.970183294s)
--- PASS: TestKicExistingNetwork (33.47s)

TestKicCustomSubnet (32.11s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-367576 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-367576 --subnet=192.168.60.0/24: (29.929836247s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-367576 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-367576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-367576
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-367576: (2.151730973s)
--- PASS: TestKicCustomSubnet (32.11s)

TestKicStaticIP (34.57s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-531476 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-531476 --static-ip=192.168.200.200: (32.287312951s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-531476 ip
helpers_test.go:175: Cleaning up "static-ip-531476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-531476
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-531476: (2.104916239s)
--- PASS: TestKicStaticIP (34.57s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.49s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-071796 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-071796 --driver=docker  --container-runtime=containerd: (33.247627496s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-074529 --driver=docker  --container-runtime=containerd
E0910 18:50:18.675107  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:40.812110  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-074529 --driver=docker  --container-runtime=containerd: (34.490030047s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-071796
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-074529
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-074529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-074529
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-074529: (2.160400965s)
helpers_test.go:175: Cleaning up "first-071796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-071796
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-071796: (2.281661181s)
--- PASS: TestMinikubeProfile (73.49s)

TestMountStart/serial/StartWithMountFirst (6.34s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-830431 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-830431 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.340315403s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.34s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-830431 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (6.96s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-844092 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-844092 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.96239367s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.96s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-844092 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-830431 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-830431 --alsologtostderr -v=5: (1.626479875s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-844092 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-844092
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-844092: (1.195309693s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.64s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-844092
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-844092: (6.63519057s)
--- PASS: TestMountStart/serial/RestartStopped (7.64s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-844092 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (77.83s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-700445 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0910 18:52:03.879075  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-700445 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.257832525s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (77.83s)

TestMultiNode/serial/DeployApp2Nodes (17.03s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-700445 -- rollout status deployment/busybox: (15.021911449s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- exec busybox-7dff88458-gl54n -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- exec busybox-7dff88458-glxwj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- exec busybox-7dff88458-gl54n -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- exec busybox-7dff88458-glxwj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- exec busybox-7dff88458-gl54n -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- exec busybox-7dff88458-glxwj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.03s)

TestMultiNode/serial/PingHostFrom2Pods (1.02s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- exec busybox-7dff88458-gl54n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- exec busybox-7dff88458-gl54n -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- exec busybox-7dff88458-glxwj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-700445 -- exec busybox-7dff88458-glxwj -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)

TestMultiNode/serial/AddNode (20.48s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-700445 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-700445 -v 3 --alsologtostderr: (19.834695642s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.48s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-700445 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.33s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

TestMultiNode/serial/CopyFile (10.00s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 cp testdata/cp-test.txt multinode-700445:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 cp multinode-700445:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3381683232/001/cp-test_multinode-700445.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 cp multinode-700445:/home/docker/cp-test.txt multinode-700445-m02:/home/docker/cp-test_multinode-700445_multinode-700445-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445-m02 "sudo cat /home/docker/cp-test_multinode-700445_multinode-700445-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 cp multinode-700445:/home/docker/cp-test.txt multinode-700445-m03:/home/docker/cp-test_multinode-700445_multinode-700445-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445-m03 "sudo cat /home/docker/cp-test_multinode-700445_multinode-700445-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 cp testdata/cp-test.txt multinode-700445-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 cp multinode-700445-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3381683232/001/cp-test_multinode-700445-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 cp multinode-700445-m02:/home/docker/cp-test.txt multinode-700445:/home/docker/cp-test_multinode-700445-m02_multinode-700445.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445 "sudo cat /home/docker/cp-test_multinode-700445-m02_multinode-700445.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 cp multinode-700445-m02:/home/docker/cp-test.txt multinode-700445-m03:/home/docker/cp-test_multinode-700445-m02_multinode-700445-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445-m03 "sudo cat /home/docker/cp-test_multinode-700445-m02_multinode-700445-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 cp testdata/cp-test.txt multinode-700445-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 cp multinode-700445-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3381683232/001/cp-test_multinode-700445-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 cp multinode-700445-m03:/home/docker/cp-test.txt multinode-700445:/home/docker/cp-test_multinode-700445-m03_multinode-700445.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445 "sudo cat /home/docker/cp-test_multinode-700445-m03_multinode-700445.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 cp multinode-700445-m03:/home/docker/cp-test.txt multinode-700445-m02:/home/docker/cp-test_multinode-700445-m03_multinode-700445-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 ssh -n multinode-700445-m02 "sudo cat /home/docker/cp-test_multinode-700445-m03_multinode-700445-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.00s)

TestMultiNode/serial/StopNode (2.21s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-700445 node stop m03: (1.190313178s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-700445 status: exit status 7 (514.099719ms)

-- stdout --
	multinode-700445
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-700445-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-700445-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-700445 status --alsologtostderr: exit status 7 (503.036771ms)

-- stdout --
	multinode-700445
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-700445-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-700445-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0910 18:53:27.944481  419469 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:53:27.944683  419469 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:53:27.944710  419469 out.go:358] Setting ErrFile to fd 2...
	I0910 18:53:27.944728  419469 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:53:27.945135  419469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
	I0910 18:53:27.945413  419469 out.go:352] Setting JSON to false
	I0910 18:53:27.945478  419469 mustload.go:65] Loading cluster: multinode-700445
	I0910 18:53:27.945573  419469 notify.go:220] Checking for updates...
	I0910 18:53:27.945965  419469 config.go:182] Loaded profile config "multinode-700445": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0910 18:53:27.946002  419469 status.go:255] checking status of multinode-700445 ...
	I0910 18:53:27.946564  419469 cli_runner.go:164] Run: docker container inspect multinode-700445 --format={{.State.Status}}
	I0910 18:53:27.966910  419469 status.go:330] multinode-700445 host status = "Running" (err=<nil>)
	I0910 18:53:27.966935  419469 host.go:66] Checking if "multinode-700445" exists ...
	I0910 18:53:27.967240  419469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-700445
	I0910 18:53:27.996530  419469 host.go:66] Checking if "multinode-700445" exists ...
	I0910 18:53:27.996980  419469 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:53:27.997039  419469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-700445
	I0910 18:53:28.024878  419469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/multinode-700445/id_rsa Username:docker}
	I0910 18:53:28.114154  419469 ssh_runner.go:195] Run: systemctl --version
	I0910 18:53:28.118626  419469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:53:28.130867  419469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 18:53:28.186111  419469 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-10 18:53:28.176161138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 18:53:28.186712  419469 kubeconfig.go:125] found "multinode-700445" server: "https://192.168.67.2:8443"
	I0910 18:53:28.186744  419469 api_server.go:166] Checking apiserver status ...
	I0910 18:53:28.186788  419469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:53:28.198518  419469 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1445/cgroup
	I0910 18:53:28.208580  419469 api_server.go:182] apiserver freezer: "8:freezer:/docker/d6519b91232d6d17a347e87c50326f0b1de17b0cd2be1a955cb2e573a682e8a8/kubepods/burstable/podc27fc142ee92b0c1d03ca7ce9687c681/5b4b9b69d9d35bcd1e2870e9fa0f7ba74333cf09ce862855d257b4b61285b008"
	I0910 18:53:28.208669  419469 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d6519b91232d6d17a347e87c50326f0b1de17b0cd2be1a955cb2e573a682e8a8/kubepods/burstable/podc27fc142ee92b0c1d03ca7ce9687c681/5b4b9b69d9d35bcd1e2870e9fa0f7ba74333cf09ce862855d257b4b61285b008/freezer.state
	I0910 18:53:28.217334  419469 api_server.go:204] freezer state: "THAWED"
	I0910 18:53:28.217365  419469 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0910 18:53:28.225921  419469 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0910 18:53:28.225960  419469 status.go:422] multinode-700445 apiserver status = Running (err=<nil>)
	I0910 18:53:28.225996  419469 status.go:257] multinode-700445 status: &{Name:multinode-700445 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:53:28.226018  419469 status.go:255] checking status of multinode-700445-m02 ...
	I0910 18:53:28.226332  419469 cli_runner.go:164] Run: docker container inspect multinode-700445-m02 --format={{.State.Status}}
	I0910 18:53:28.242435  419469 status.go:330] multinode-700445-m02 host status = "Running" (err=<nil>)
	I0910 18:53:28.242459  419469 host.go:66] Checking if "multinode-700445-m02" exists ...
	I0910 18:53:28.242790  419469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-700445-m02
	I0910 18:53:28.259110  419469 host.go:66] Checking if "multinode-700445-m02" exists ...
	I0910 18:53:28.259438  419469 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:53:28.259491  419469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-700445-m02
	I0910 18:53:28.276934  419469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/19598-293262/.minikube/machines/multinode-700445-m02/id_rsa Username:docker}
	I0910 18:53:28.365912  419469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:53:28.377577  419469 status.go:257] multinode-700445-m02 status: &{Name:multinode-700445-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:53:28.377611  419469 status.go:255] checking status of multinode-700445-m03 ...
	I0910 18:53:28.377924  419469 cli_runner.go:164] Run: docker container inspect multinode-700445-m03 --format={{.State.Status}}
	I0910 18:53:28.394724  419469 status.go:330] multinode-700445-m03 host status = "Stopped" (err=<nil>)
	I0910 18:53:28.394751  419469 status.go:343] host is not running, skipping remaining checks
	I0910 18:53:28.394759  419469 status.go:257] multinode-700445-m03 status: &{Name:multinode-700445-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)

TestMultiNode/serial/StartAfterStop (9.92s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-700445 node start m03 -v=7 --alsologtostderr: (9.173893714s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.92s)

TestMultiNode/serial/RestartKeepsNodes (99.29s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-700445
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-700445
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-700445: (24.975596395s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-700445 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-700445 --wait=true -v=8 --alsologtostderr: (1m14.172040211s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-700445
--- PASS: TestMultiNode/serial/RestartKeepsNodes (99.29s)

TestMultiNode/serial/DeleteNode (5.62s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 node delete m03
E0910 18:55:18.674784  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-700445 node delete m03: (4.981112818s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.62s)

TestMultiNode/serial/StopMultiNode (24.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 stop
E0910 18:55:40.811315  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-700445 stop: (23.864501502s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-700445 status: exit status 7 (97.178138ms)

-- stdout --
	multinode-700445
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-700445-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-700445 status --alsologtostderr: exit status 7 (87.332389ms)

-- stdout --
	multinode-700445
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-700445-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0910 18:55:47.242768  427931 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:55:47.243237  427931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:55:47.243252  427931 out.go:358] Setting ErrFile to fd 2...
	I0910 18:55:47.243258  427931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:55:47.243581  427931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
	I0910 18:55:47.243826  427931 out.go:352] Setting JSON to false
	I0910 18:55:47.243882  427931 mustload.go:65] Loading cluster: multinode-700445
	I0910 18:55:47.244328  427931 config.go:182] Loaded profile config "multinode-700445": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0910 18:55:47.244348  427931 status.go:255] checking status of multinode-700445 ...
	I0910 18:55:47.244899  427931 cli_runner.go:164] Run: docker container inspect multinode-700445 --format={{.State.Status}}
	I0910 18:55:47.244967  427931 notify.go:220] Checking for updates...
	I0910 18:55:47.262361  427931 status.go:330] multinode-700445 host status = "Stopped" (err=<nil>)
	I0910 18:55:47.262383  427931 status.go:343] host is not running, skipping remaining checks
	I0910 18:55:47.262390  427931 status.go:257] multinode-700445 status: &{Name:multinode-700445 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:55:47.262421  427931 status.go:255] checking status of multinode-700445-m02 ...
	I0910 18:55:47.262730  427931 cli_runner.go:164] Run: docker container inspect multinode-700445-m02 --format={{.State.Status}}
	I0910 18:55:47.279162  427931 status.go:330] multinode-700445-m02 host status = "Stopped" (err=<nil>)
	I0910 18:55:47.279183  427931 status.go:343] host is not running, skipping remaining checks
	I0910 18:55:47.279190  427931 status.go:257] multinode-700445-m02 status: &{Name:multinode-700445-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)

TestMultiNode/serial/RestartMultiNode (53.1s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-700445 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-700445 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.350045382s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-700445 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.10s)

TestMultiNode/serial/ValidateNameConflict (33.27s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-700445
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-700445-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-700445-m02 --driver=docker  --container-runtime=containerd: exit status 14 (79.079939ms)

-- stdout --
	* [multinode-700445-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-293262/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-293262/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-700445-m02' is duplicated with machine name 'multinode-700445-m02' in profile 'multinode-700445'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-700445-m03 --driver=docker  --container-runtime=containerd
E0910 18:56:41.743093  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-700445-m03 --driver=docker  --container-runtime=containerd: (30.813282s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-700445
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-700445: exit status 80 (324.717155ms)

-- stdout --
	* Adding node m03 to cluster multinode-700445 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-700445-m03 already exists in multinode-700445-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-700445-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-700445-m03: (2.008204605s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.27s)

TestPreload (125.96s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-760708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-760708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m27.675035683s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-760708 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-760708 image pull gcr.io/k8s-minikube/busybox: (1.910775898s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-760708
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-760708: (12.021991287s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-760708 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-760708 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.676833629s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-760708 image list
helpers_test.go:175: Cleaning up "test-preload-760708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-760708
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-760708: (2.362156725s)
--- PASS: TestPreload (125.96s)

TestScheduledStopUnix (106.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-472012 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-472012 --memory=2048 --driver=docker  --container-runtime=containerd: (30.276393341s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-472012 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-472012 -n scheduled-stop-472012
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-472012 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-472012 --cancel-scheduled
E0910 19:00:18.674614  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-472012 -n scheduled-stop-472012
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-472012
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-472012 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0910 19:00:40.811943  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-472012
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-472012: exit status 7 (73.181336ms)

-- stdout --
	scheduled-stop-472012
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-472012 -n scheduled-stop-472012
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-472012 -n scheduled-stop-472012: exit status 7 (67.233613ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-472012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-472012
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-472012: (5.117935821s)
--- PASS: TestScheduledStopUnix (106.98s)

TestInsufficientStorage (11.33s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-900117 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-900117 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.782879198s)

-- stdout --
	{"specversion":"1.0","id":"a98b7f5c-db82-45ef-9056-211ac7c24b43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-900117] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"108a925e-0991-4d07-aaa3-e7fd355fe8df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19598"}}
	{"specversion":"1.0","id":"41938d95-1321-41f2-8938-932e48a352b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8dbb8b16-3fb1-49a7-b288-33ce7c8e77b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19598-293262/kubeconfig"}}
	{"specversion":"1.0","id":"680dbe77-e460-4402-a522-d4ac3363af3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-293262/.minikube"}}
	{"specversion":"1.0","id":"5f57eb23-c8c4-4431-9eb5-63d6acf47c96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9f7b58ae-26b7-4dc0-9adc-fbe28c221184","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c41b0015-0292-4f69-b619-4c691646f071","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d1a3667d-b0c3-41ee-bf7d-bcbf0db3689a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"fa845323-f75d-4c63-b399-00835022a49f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e34aedfd-0950-4caf-b077-aaa2c8d785d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"15b822db-a818-47d7-8e7c-cf06e8db682c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-900117\" primary control-plane node in \"insufficient-storage-900117\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c878db7-954d-4126-bee3-6b12ca12350f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1725963390-19606 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4de364a7-6331-4f70-9331-d07b1e7d78d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f13da6a1-0b7b-4d6d-9355-ebe85943a920","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-900117 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-900117 --output=json --layout=cluster: exit status 7 (289.444778ms)

-- stdout --
	{"Name":"insufficient-storage-900117","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-900117","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0910 19:01:19.751115  446541 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-900117" does not appear in /home/jenkins/minikube-integration/19598-293262/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-900117 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-900117 --output=json --layout=cluster: exit status 7 (343.168683ms)

-- stdout --
	{"Name":"insufficient-storage-900117","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-900117","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0910 19:01:20.095038  446603 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-900117" does not appear in /home/jenkins/minikube-integration/19598-293262/kubeconfig
	E0910 19:01:20.106349  446603 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/insufficient-storage-900117/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-900117" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-900117
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-900117: (1.914250323s)
--- PASS: TestInsufficientStorage (11.33s)
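The `--output=json` lines above are CloudEvents-style records, one JSON object per line. A minimal sketch of consuming that stream to surface the storage failure — the event payload is trimmed from the output above, and the helper name is illustrative, not part of minikube:

```python
import json

# One error event copied (trimmed) from the minikube --output=json run above.
line = ('{"specversion":"1.0","source":"https://minikube.sigs.k8s.io/",'
        '"type":"io.k8s.sigs.minikube.error",'
        '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE",'
        '"message":"Docker is out of disk space! (/var is at 100% of capacity)."}}')

def extract_error(raw):
    """Return (name, exitcode) for minikube error events, None otherwise."""
    event = json.loads(raw)
    if event.get("type") != "io.k8s.sigs.minikube.error":
        return None
    data = event.get("data", {})
    return data.get("name"), int(data.get("exitcode", "0"))

print(extract_error(line))  # ('RSRC_DOCKER_STORAGE', 26)
```

The `26` matches the process exit status reported by the test, since minikube mirrors the error's `exitcode` field in its own exit code.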

TestRunningBinaryUpgrade (82.95s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1765815237 start -p running-upgrade-110961 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0910 19:10:40.811474  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1765815237 start -p running-upgrade-110961 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.465212529s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-110961 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-110961 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.546556222s)
helpers_test.go:175: Cleaning up "running-upgrade-110961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-110961
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-110961: (2.896581303s)
--- PASS: TestRunningBinaryUpgrade (82.95s)

TestKubernetesUpgrade (349.98s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-629173 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-629173 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (57.079243073s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-629173
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-629173: (1.25269007s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-629173 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-629173 status --format={{.Host}}: exit status 7 (89.414155ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-629173 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-629173 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m39.409026572s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-629173 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-629173 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-629173 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (102.340517ms)

-- stdout --
	* [kubernetes-upgrade-629173] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-293262/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-293262/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-629173
	    minikube start -p kubernetes-upgrade-629173 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6291732 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-629173 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-629173 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-629173 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.08456005s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-629173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-629173
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-629173: (2.814393042s)
--- PASS: TestKubernetesUpgrade (349.98s)
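The `K8S_DOWNGRADE_UNSUPPORTED` rejection above (v1.31.0 → v1.20.0, exit status 106) comes down to a semantic-version comparison between the cluster's current Kubernetes version and the requested one. An illustrative sketch of such a guard — this is not minikube's actual implementation, just the comparison it implies:

```python
# Sketch of a downgrade guard like the one that produced
# K8S_DOWNGRADE_UNSUPPORTED above; names are illustrative.

def parse_version(v: str) -> tuple:
    """'v1.31.0' -> (1, 31, 0), so tuples compare in version order."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def is_downgrade(current: str, requested: str) -> bool:
    return parse_version(requested) < parse_version(current)

print(is_downgrade("v1.31.0", "v1.20.0"))  # True: refused, as in the log
print(is_downgrade("v1.20.0", "v1.31.0"))  # False: upgrades are allowed
```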

TestMissingContainerUpgrade (181.09s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3858516353 start -p missing-upgrade-080098 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3858516353 start -p missing-upgrade-080098 --memory=2200 --driver=docker  --container-runtime=containerd: (1m40.199069517s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-080098
E0910 19:05:18.674848  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-080098: (10.277505703s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-080098
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-080098 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0910 19:05:40.812057  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-080098 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m7.098617393s)
helpers_test.go:175: Cleaning up "missing-upgrade-080098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-080098
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-080098: (2.585244785s)
--- PASS: TestMissingContainerUpgrade (181.09s)

TestPause/serial/Start (69.68s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-495884 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-495884 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m9.68147957s)
--- PASS: TestPause/serial/Start (69.68s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-396171 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-396171 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (99.132138ms)

-- stdout --
	* [NoKubernetes-396171] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-293262/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-293262/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (42.61s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-396171 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-396171 --driver=docker  --container-runtime=containerd: (42.123268174s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-396171 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.61s)

TestNoKubernetes/serial/StartWithStopK8s (18.01s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-396171 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-396171 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.695926932s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-396171 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-396171 status -o json: exit status 2 (312.509024ms)

-- stdout --
	{"Name":"NoKubernetes-396171","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-396171
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-396171: (2.00594307s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.01s)

TestNoKubernetes/serial/Start (8.77s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-396171 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-396171 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.768951836s)
--- PASS: TestNoKubernetes/serial/Start (8.77s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-396171 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-396171 "sudo systemctl is-active --quiet service kubelet": exit status 1 (327.835372ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

TestPause/serial/SecondStartNoReconfiguration (7.65s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-495884 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-495884 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.622328551s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.65s)

TestNoKubernetes/serial/ProfileList (1.25s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.25s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-396171
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-396171: (1.268577051s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (7.37s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-396171 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-396171 --driver=docker  --container-runtime=containerd: (7.368107239s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.37s)

TestPause/serial/Pause (1.01s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-495884 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-495884 --alsologtostderr -v=5: (1.006227946s)
--- PASS: TestPause/serial/Pause (1.01s)

TestPause/serial/VerifyStatus (0.39s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-495884 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-495884 --output=json --layout=cluster: exit status 2 (394.799327ms)

-- stdout --
	{"Name":"pause-495884","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-495884","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
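The `--layout=cluster` payload checked above nests per-component status codes inside each node entry. A minimal sketch of pulling those fields out, parsing a trimmed copy of the exact JSON captured in this run (the helper name is illustrative, not part of the test suite):

```python
import json

# Status JSON as emitted by `minikube status --output=json --layout=cluster`
# for the paused profile above (trimmed to the fields used here).
payload = '''{"Name":"pause-495884","StatusCode":418,"StatusName":"Paused","BinaryVersion":"v1.34.0","Nodes":[{"Name":"pause-495884","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

def component_states(raw: str) -> dict:
    """Map each node component to its StatusName (hypothetical helper)."""
    status = json.loads(raw)
    return {
        name: comp["StatusName"]
        for node in status["Nodes"]
        for name, comp in node["Components"].items()
    }

print(component_states(payload))  # {'apiserver': 'Paused', 'kubelet': 'Stopped'}
```

This matches why the command exits non-zero here: a paused apiserver (418) and stopped kubelet (405) are still a successful *status query*, but the tool signals the non-Running cluster state through its exit code.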

TestPause/serial/Unpause (0.72s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-495884 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.72s)

TestPause/serial/PauseAgain (1.06s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-495884 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-495884 --alsologtostderr -v=5: (1.055561447s)
--- PASS: TestPause/serial/PauseAgain (1.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-396171 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-396171 "sudo systemctl is-active --quiet service kubelet": exit status 1 (364.751681ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestPause/serial/DeletePaused (3.02s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-495884 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-495884 --alsologtostderr -v=5: (3.022346181s)
--- PASS: TestPause/serial/DeletePaused (3.02s)

TestPause/serial/VerifyDeletedResources (0.18s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-495884
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-495884: exit status 1 (24.35323ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-495884: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.18s)

TestNetworkPlugins/group/false (5.8s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-942570 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-942570 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (235.030648ms)

-- stdout --
	* [false-942570] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-293262/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-293262/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0910 19:02:48.458355  458939 out.go:345] Setting OutFile to fd 1 ...
	I0910 19:02:48.458667  458939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:02:48.458700  458939 out.go:358] Setting ErrFile to fd 2...
	I0910 19:02:48.458730  458939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:02:48.459059  458939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-293262/.minikube/bin
	I0910 19:02:48.459596  458939 out.go:352] Setting JSON to false
	I0910 19:02:48.460554  458939 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9919,"bootTime":1725985050,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0910 19:02:48.460661  458939 start.go:139] virtualization:  
	I0910 19:02:48.464460  458939 out.go:177] * [false-942570] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0910 19:02:48.467024  458939 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 19:02:48.467090  458939 notify.go:220] Checking for updates...
	I0910 19:02:48.473896  458939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 19:02:48.476168  458939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-293262/kubeconfig
	I0910 19:02:48.478125  458939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-293262/.minikube
	I0910 19:02:48.480447  458939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0910 19:02:48.482728  458939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 19:02:48.485318  458939 config.go:182] Loaded profile config "force-systemd-env-606143": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0910 19:02:48.485425  458939 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 19:02:48.520305  458939 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0910 19:02:48.520410  458939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 19:02:48.612571  458939 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:58 SystemTime:2024-09-10 19:02:48.602127214 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 19:02:48.612690  458939 docker.go:318] overlay module found
	I0910 19:02:48.617832  458939 out.go:177] * Using the docker driver based on user configuration
	I0910 19:02:48.619784  458939 start.go:297] selected driver: docker
	I0910 19:02:48.619799  458939 start.go:901] validating driver "docker" against <nil>
	I0910 19:02:48.619814  458939 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 19:02:48.621949  458939 out.go:201] 
	W0910 19:02:48.623848  458939 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0910 19:02:48.626230  458939 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-942570 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-942570

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-942570

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-942570

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-942570

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-942570

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-942570

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-942570

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-942570

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-942570

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-942570

>>> host: /etc/nsswitch.conf:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: /etc/hosts:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: /etc/resolv.conf:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-942570

>>> host: crictl pods:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: crictl containers:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> k8s: describe netcat deployment:
error: context "false-942570" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-942570" does not exist

>>> k8s: netcat logs:
error: context "false-942570" does not exist

>>> k8s: describe coredns deployment:
error: context "false-942570" does not exist

>>> k8s: describe coredns pods:
error: context "false-942570" does not exist

>>> k8s: coredns logs:
error: context "false-942570" does not exist

>>> k8s: describe api server pod(s):
error: context "false-942570" does not exist

>>> k8s: api server logs:
error: context "false-942570" does not exist

>>> host: /etc/cni:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: ip a s:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: ip r s:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: iptables-save:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: iptables table nat:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> k8s: describe kube-proxy daemon set:
error: context "false-942570" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-942570" does not exist

>>> k8s: kube-proxy logs:
error: context "false-942570" does not exist

>>> host: kubelet daemon status:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: kubelet daemon config:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> k8s: kubelet logs:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-942570

>>> host: docker daemon status:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: docker daemon config:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: /etc/docker/daemon.json:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: docker system info:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: cri-docker daemon status:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: cri-docker daemon config:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: cri-dockerd version:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: containerd daemon status:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: containerd daemon config:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: /etc/containerd/config.toml:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: containerd config dump:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: crio daemon status:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: crio daemon config:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: /etc/crio:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

>>> host: crio config:
* Profile "false-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942570"

----------------------- debugLogs end: false-942570 [took: 5.313234971s] --------------------------------
helpers_test.go:175: Cleaning up "false-942570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-942570
--- PASS: TestNetworkPlugins/group/false (5.80s)

TestStoppedBinaryUpgrade/Setup (0.99s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.99s)

TestStoppedBinaryUpgrade/Upgrade (116.57s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1496156971 start -p stopped-upgrade-947013 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0910 19:08:43.880473  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1496156971 start -p stopped-upgrade-947013 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.720279583s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1496156971 -p stopped-upgrade-947013 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1496156971 -p stopped-upgrade-947013 stop: (19.962628428s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-947013 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-947013 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (48.88983115s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (116.57s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-947013
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-947013: (1.174432614s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

TestNetworkPlugins/group/auto/Start (75.51s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-942570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-942570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m15.508368291s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.51s)

TestNetworkPlugins/group/kindnet/Start (51.34s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-942570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-942570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (51.335513656s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.34s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-942570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-942570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ll8vr" [26403973-e3ad-4bc9-bdd6-f18810c962cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ll8vr" [26403973-e3ad-4bc9-bdd6-f18810c962cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004716576s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.38s)

TestNetworkPlugins/group/auto/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-942570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.32s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-942570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-942570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bgccp" [daf09e8f-0a6b-465d-9b59-3cb064e9433a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003719029s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-942570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-942570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jd4g5" [a794261e-a26b-4728-a107-78c47f5e84fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0910 19:13:21.745037  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-jd4g5" [a794261e-a26b-4728-a107-78c47f5e84fb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004956216s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.39s)

TestNetworkPlugins/group/kindnet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-942570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

TestNetworkPlugins/group/kindnet/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-942570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.31s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-942570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/calico/Start (77.35s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-942570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-942570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m17.349994177s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.35s)

TestNetworkPlugins/group/custom-flannel/Start (60.98s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-942570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-942570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m0.983533316s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.98s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dwr4r" [7623141f-f250-423e-829e-7a2cf5a566fa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004602568s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-942570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-942570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ktd66" [9d8a6812-46df-4cbf-bcd8-17bbd0d39484] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ktd66" [9d8a6812-46df-4cbf-bcd8-17bbd0d39484] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004210787s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.30s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-942570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-942570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5xqpl" [be3ec6a1-89ae-4f8d-9bb2-dcb8668e3073] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5xqpl" [be3ec6a1-89ae-4f8d-9bb2-dcb8668e3073] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003811082s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-942570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-942570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

TestNetworkPlugins/group/calico/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-942570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-942570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-942570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-942570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (47.76s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-942570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-942570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (47.755542622s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (47.76s)

TestNetworkPlugins/group/flannel/Start (55.3s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-942570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0910 19:15:40.812136  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-942570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (55.304814948s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.30s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-942570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-942570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xl9jm" [9fe0b6ea-60bc-4eeb-a086-ac76803a5b3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xl9jm" [9fe0b6ea-60bc-4eeb-a086-ac76803a5b3f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004139251s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.44s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-942570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-942570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-942570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.30s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-m9pb5" [f6dac13e-53d2-4c19-8e99-6b633c67bdcd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003939629s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-942570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-942570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rktml" [a6498573-89bc-4110-9a09-0a2740c25089] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rktml" [a6498573-89bc-4110-9a09-0a2740c25089] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004127165s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-942570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-942570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-942570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (53.65s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-942570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-942570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (53.651260184s)
--- PASS: TestNetworkPlugins/group/bridge/Start (53.65s)

TestStartStop/group/old-k8s-version/serial/FirstStart (152.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-500483 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-500483 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m32.141095505s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (152.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-942570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-942570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9lsph" [9645a343-b5ec-4ec4-a7dd-d4cb796d2545] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9lsph" [9645a343-b5ec-4ec4-a7dd-d4cb796d2545] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003441583s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.36s)

TestNetworkPlugins/group/bridge/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-942570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.26s)

TestNetworkPlugins/group/bridge/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-942570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.25s)

TestNetworkPlugins/group/bridge/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-942570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.27s)
E0910 19:30:18.674296  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/no-preload/serial/FirstStart (71.07s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-367168 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0910 19:18:22.082734  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/kindnet-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:18:32.324537  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/kindnet-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:18:39.938406  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/auto-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:18:52.806145  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/kindnet-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:20.900498  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/auto-942570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-367168 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m11.068515903s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.07s)

TestStartStop/group/no-preload/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-367168 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6b0db637-bd45-43b3-8a9c-59a197dc5bc6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0910 19:19:33.768473  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/kindnet-942570/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [6b0db637-bd45-43b3-8a9c-59a197dc5bc6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.010366117s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-367168 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.40s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-367168 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-367168 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.138718858s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-367168 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/no-preload/serial/Stop (12.19s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-367168 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-367168 --alsologtostderr -v=3: (12.193170565s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.19s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-500483 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ff053ba8-2fa9-42a1-92fa-a980a3402d25] Pending
helpers_test.go:344: "busybox" [ff053ba8-2fa9-42a1-92fa-a980a3402d25] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0910 19:19:48.818962  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:48.825551  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:48.836987  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:48.858392  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:48.899957  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:48.981583  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:49.143019  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:49.464751  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:50.106868  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [ff053ba8-2fa9-42a1-92fa-a980a3402d25] Running
E0910 19:19:51.389039  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:53.951371  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.006107401s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-500483 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.56s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-367168 -n no-preload-367168
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-367168 -n no-preload-367168: exit status 7 (79.303639ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-367168 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (268.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-367168 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-367168 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m27.93432892s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-367168 -n no-preload-367168
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (268.27s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-500483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0910 19:19:56.828375  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:56.834712  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:56.846064  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:56.867437  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:56.908848  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-500483 describe deploy/metrics-server -n kube-system
E0910 19:19:56.990763  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/old-k8s-version/serial/Stop (12.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-500483 --alsologtostderr -v=3
E0910 19:19:57.152552  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:57.474805  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:58.116398  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:59.073552  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:59.398732  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:20:01.960173  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:20:07.081977  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:20:09.315053  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-500483 --alsologtostderr -v=3: (12.472024836s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.47s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-500483 -n old-k8s-version-500483
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-500483 -n old-k8s-version-500483: exit status 7 (129.318372ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-500483 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/old-k8s-version/serial/SecondStart (151.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-500483 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0910 19:20:17.324174  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:20:18.674754  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:20:29.796373  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:20:37.805946  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:20:40.811375  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:20:42.821912  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/auto-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:20:55.690856  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/kindnet-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:10.757937  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:18.767977  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:19.514100  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:19.520533  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:19.532004  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:19.553489  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:19.594865  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:19.676317  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:19.837813  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:20.159122  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:20.800953  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:22.082551  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:24.644039  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:29.765360  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:31.132005  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:31.138489  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:31.149921  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:31.171316  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:31.212741  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:31.294770  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:31.456273  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:31.777991  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:32.420348  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:33.701740  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:36.264077  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:40.016647  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:41.385746  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:21:51.627735  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:22:00.502058  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:22:12.109577  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:22:32.680269  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:22:40.690145  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-500483 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m31.020565502s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-500483 -n old-k8s-version-500483
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (151.41s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-88hmz" [7a2b88c4-61a5-4888-921e-29ffc623a3a6] Running
E0910 19:22:41.464072  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:22:46.261510  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:22:46.267926  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:22:46.279486  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:22:46.300975  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:22:46.342326  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:22:46.423665  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:22:46.584981  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:22:46.906662  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004600405s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-88hmz" [7a2b88c4-61a5-4888-921e-29ffc623a3a6] Running
E0910 19:22:47.548196  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:22:48.830216  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:22:51.391911  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003781484s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-500483 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-500483 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-500483 --alsologtostderr -v=1
E0910 19:22:53.071471  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-500483 -n old-k8s-version-500483
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-500483 -n old-k8s-version-500483: exit status 2 (341.487017ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-500483 -n old-k8s-version-500483
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-500483 -n old-k8s-version-500483: exit status 2 (367.614247ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-500483 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-500483 -n old-k8s-version-500483
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-500483 -n old-k8s-version-500483
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.08s)

TestStartStop/group/embed-certs/serial/FirstStart (65.91s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-842409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0910 19:22:58.960915  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/auto-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:23:06.755161  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:23:11.825381  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/kindnet-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:23:26.663266  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/auto-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:23:27.237326  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:23:39.532679  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/kindnet-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:24:03.386237  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-842409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m5.912527543s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.91s)

TestStartStop/group/embed-certs/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-842409 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fe66e028-c832-4596-b6e1-25ee216c7f11] Pending
helpers_test.go:344: "busybox" [fe66e028-c832-4596-b6e1-25ee216c7f11] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fe66e028-c832-4596-b6e1-25ee216c7f11] Running
E0910 19:24:08.199285  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.007440218s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-842409 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.52s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-842409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-842409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.360287511s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-842409 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.52s)

TestStartStop/group/embed-certs/serial/Stop (12.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-842409 --alsologtostderr -v=3
E0910 19:24:14.993276  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-842409 --alsologtostderr -v=3: (12.141320001s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.14s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lvnlf" [45f075ac-aaa2-43d0-a030-08af241c333c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003864351s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-842409 -n embed-certs-842409
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-842409 -n embed-certs-842409: exit status 7 (72.762769ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-842409 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (270.77s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-842409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-842409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m30.397619339s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-842409 -n embed-certs-842409
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (270.77s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lvnlf" [45f075ac-aaa2-43d0-a030-08af241c333c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004922938s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-367168 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-367168 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/no-preload/serial/Pause (4.53s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-367168 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-367168 --alsologtostderr -v=1: (1.315173694s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-367168 -n no-preload-367168
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-367168 -n no-preload-367168: exit status 2 (482.901998ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-367168 -n no-preload-367168
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-367168 -n no-preload-367168: exit status 2 (477.13789ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-367168 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-367168 --alsologtostderr -v=1: (1.096324485s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-367168 -n no-preload-367168
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-367168 -n no-preload-367168
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.53s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-510008 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0910 19:24:47.739894  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:24:47.746265  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:24:47.757624  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:24:47.779013  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:24:47.820388  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:24:47.901777  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:24:48.063283  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:24:48.385299  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:24:48.818958  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:24:49.027234  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:24:50.308483  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:24:52.870428  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:24:56.827845  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:24:57.992564  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:25:08.234371  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:25:16.521622  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:25:18.674711  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:25:23.881790  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:25:24.531710  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:25:28.715958  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:25:30.121453  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-510008 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (54.575264057s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.58s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-510008 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cb13b548-a3bc-43d8-9368-70aaa846a80e] Pending
helpers_test.go:344: "busybox" [cb13b548-a3bc-43d8-9368-70aaa846a80e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0910 19:25:40.811634  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [cb13b548-a3bc-43d8-9368-70aaa846a80e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004038575s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-510008 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-510008 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-510008 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.064532097s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-510008 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-510008 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-510008 --alsologtostderr -v=3: (12.086955586s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-510008 -n default-k8s-diff-port-510008
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-510008 -n default-k8s-diff-port-510008: exit status 7 (226.844261ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-510008 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.54s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (279.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-510008 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0910 19:26:09.677607  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:26:19.514729  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:26:31.131670  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:26:47.227677  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/enable-default-cni-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:26:58.834669  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:27:31.599338  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:27:46.261399  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:27:58.961046  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/auto-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:28:11.825027  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/kindnet-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:28:13.965334  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/bridge-942570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-510008 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m38.821790522s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-510008 -n default-k8s-diff-port-510008
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (279.15s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-l2wwv" [d79ece43-b5fc-4cdd-8f0b-e45be1d270a3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003123998s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-l2wwv" [d79ece43-b5fc-4cdd-8f0b-e45be1d270a3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005213709s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-842409 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-842409 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.15s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-842409 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-842409 -n embed-certs-842409
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-842409 -n embed-certs-842409: exit status 2 (377.675644ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-842409 -n embed-certs-842409
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-842409 -n embed-certs-842409: exit status 2 (330.404808ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-842409 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-842409 -n embed-certs-842409
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-842409 -n embed-certs-842409
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.15s)

TestStartStop/group/newest-cni/serial/FirstStart (36.19s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-999397 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0910 19:29:31.662941  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:29:31.669304  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:29:31.680749  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:29:31.702142  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:29:31.743671  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:29:31.825161  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:29:31.986916  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:29:32.308266  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:29:32.950417  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:29:34.232000  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:29:36.793272  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:29:41.915295  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:29:47.739664  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:29:48.819853  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/calico-942570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-999397 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (36.192995391s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.19s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.38s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-999397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0910 19:29:52.157123  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-999397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.382355223s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.38s)

TestStartStop/group/newest-cni/serial/Stop (1.29s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-999397 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-999397 --alsologtostderr -v=3: (1.290960137s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-999397 -n newest-cni-999397
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-999397 -n newest-cni-999397: exit status 7 (73.15857ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-999397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (17.85s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-999397 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0910 19:29:56.828757  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/custom-flannel-942570/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:30:01.746762  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/functional-370349/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-999397 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (17.338105878s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-999397 -n newest-cni-999397
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.85s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-999397 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (3.12s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-999397 --alsologtostderr -v=1
E0910 19:30:12.638910  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-999397 -n newest-cni-999397
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-999397 -n newest-cni-999397: exit status 2 (330.047471ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-999397 -n newest-cni-999397
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-999397 -n newest-cni-999397: exit status 2 (359.436816ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-999397 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-999397 -n newest-cni-999397
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-999397 -n newest-cni-999397
E0910 19:30:15.442974  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/old-k8s-version-500483/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.12s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-22ltb" [e9086772-208c-482b-884d-9a66782bf7ab] Running
E0910 19:30:40.811788  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/addons-827965/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004116313s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-22ltb" [e9086772-208c-482b-884d-9a66782bf7ab] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003793923s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-510008 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-510008 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.93s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-510008 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-510008 -n default-k8s-diff-port-510008
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-510008 -n default-k8s-diff-port-510008: exit status 2 (317.188474ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-510008 -n default-k8s-diff-port-510008
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-510008 -n default-k8s-diff-port-510008: exit status 2 (316.723621ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-510008 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-510008 -n default-k8s-diff-port-510008
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-510008 -n default-k8s-diff-port-510008
E0910 19:30:53.600714  298655 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-293262/.minikube/profiles/no-preload-367168/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.93s)

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-096745 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-096745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-096745
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.55s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-942570 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-942570

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-942570

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-942570

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-942570

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-942570

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-942570

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-942570

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-942570

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-942570

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-942570

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: /etc/hosts:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: /etc/resolv.conf:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-942570

>>> host: crictl pods:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: crictl containers:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> k8s: describe netcat deployment:
error: context "kubenet-942570" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-942570" does not exist

>>> k8s: netcat logs:
error: context "kubenet-942570" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-942570" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-942570" does not exist

>>> k8s: coredns logs:
error: context "kubenet-942570" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-942570" does not exist

>>> k8s: api server logs:
error: context "kubenet-942570" does not exist

>>> host: /etc/cni:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: ip a s:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: ip r s:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: iptables-save:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: iptables table nat:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-942570" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-942570" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-942570" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: kubelet daemon config:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> k8s: kubelet logs:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-942570

>>> host: docker daemon status:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: docker daemon config:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: docker system info:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: cri-docker daemon status:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: cri-docker daemon config:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: cri-dockerd version:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: containerd daemon status:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: containerd daemon config:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: containerd config dump:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: crio daemon status:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: crio daemon config:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: /etc/crio:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

>>> host: crio config:
* Profile "kubenet-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942570"

----------------------- debugLogs end: kubenet-942570 [took: 4.353131337s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-942570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-942570
--- SKIP: TestNetworkPlugins/group/kubenet (4.55s)

TestNetworkPlugins/group/cilium (5.03s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-942570 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-942570

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-942570

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-942570

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-942570

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-942570

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-942570

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-942570

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-942570

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-942570

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-942570

>>> host: /etc/nsswitch.conf:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

>>> host: /etc/hosts:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

>>> host: /etc/resolv.conf:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-942570

>>> host: crictl pods:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

>>> host: crictl containers:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

>>> k8s: describe netcat deployment:
error: context "cilium-942570" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-942570" does not exist

>>> k8s: netcat logs:
error: context "cilium-942570" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-942570" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-942570" does not exist

>>> k8s: coredns logs:
error: context "cilium-942570" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-942570" does not exist

>>> k8s: api server logs:
error: context "cilium-942570" does not exist

>>> host: /etc/cni:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

>>> host: ip a s:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-942570

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-942570

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-942570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-942570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-942570

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-942570

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-942570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-942570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-942570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-942570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-942570" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-942570

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-942570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942570"

                                                
                                                
----------------------- debugLogs end: cilium-942570 [took: 4.787206632s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-942570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-942570
--- SKIP: TestNetworkPlugins/group/cilium (5.03s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-250102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-250102
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)