Test Report: Docker_Linux_containerd_arm64 19700

8b226b9d2c09f79dcc3a887682b5a6bd27a95904:2024-09-24:36357

Failed tests (1/327)

|-------|---------------------------|--------------|
| Order |        Failed Test        | Duration (s) |
|-------|---------------------------|--------------|
|    29 | TestAddons/serial/Volcano |       199.89 |
|-------|---------------------------|--------------|
TestAddons/serial/Volcano (199.89s)
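
The failure recorded below is a scheduling error: the Volcano test job requests cpu: 1, and the single-node cluster (2 CPUs, per NanoCpus 2000000000 in the docker inspect output further down) has no unreserved CPU left once all of the enabled addons are running. A hedged triage sketch for this class of failure; these are illustrative, standard kubectl commands only, with the context and node name taken from this run's logs:

	# Show allocatable CPU vs. what is already requested; "Allocated resources"
	# is a standard section of "kubectl describe node" output.
	kubectl --context addons-783184 describe node addons-783184 | grep -A 8 "Allocated resources"

	# List per-pod CPU requests across all namespaces to see what consumes the budget.
	kubectl --context addons-783184 get pods -A \
	  -o custom-columns='NS:.metadata.namespace,POD:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu'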

=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 51.866056ms
addons_test.go:851: volcano-controller stabilized in 52.40467ms
addons_test.go:835: volcano-scheduler stabilized in 52.525391ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-97rkk" [96966f75-5651-47aa-9fc5-d4bb6aa841e7] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003147544s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-8t7dh" [e0aae584-7a8a-4bb0-ac30-ba3b3e6ca1bf] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004307014s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-pn52r" [88652b45-4c2b-462c-af32-2d8b915d21f2] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003050985s
addons_test.go:870: (dbg) Run:  kubectl --context addons-783184 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-783184 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-783184 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [4c89eabf-2c9f-4830-9afd-83c40711e3bd] Pending
helpers_test.go:344: "test-job-nginx-0" [4c89eabf-2c9f-4830-9afd-83c40711e3bd] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:902: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:902: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-783184 -n addons-783184
addons_test.go:902: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-24 18:44:19.374567651 +0000 UTC m=+433.126042153
addons_test.go:902: (dbg) Run:  kubectl --context addons-783184 describe po test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-783184 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-7fe1d191-e2e6-42c6-ba17-b9bc550d33ba
volcano.sh/job-name: test-job
volcano.sh/job-retry-count: 0
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86hww (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-86hww:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:902: (dbg) Run:  kubectl --context addons-783184 logs test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-783184 logs test-job-nginx-0 -n my-volcano:
addons_test.go:903: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-783184
helpers_test.go:235: (dbg) docker inspect addons-783184:

-- stdout --
	[
	    {
	        "Id": "93cfb52902bae015f7fb092c45ebfbb70f894edeb0e6d6a19458867275c5e98d",
	        "Created": "2024-09-24T18:37:50.001329723Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 446685,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-24T18:37:50.165089597Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/93cfb52902bae015f7fb092c45ebfbb70f894edeb0e6d6a19458867275c5e98d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/93cfb52902bae015f7fb092c45ebfbb70f894edeb0e6d6a19458867275c5e98d/hostname",
	        "HostsPath": "/var/lib/docker/containers/93cfb52902bae015f7fb092c45ebfbb70f894edeb0e6d6a19458867275c5e98d/hosts",
	        "LogPath": "/var/lib/docker/containers/93cfb52902bae015f7fb092c45ebfbb70f894edeb0e6d6a19458867275c5e98d/93cfb52902bae015f7fb092c45ebfbb70f894edeb0e6d6a19458867275c5e98d-json.log",
	        "Name": "/addons-783184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-783184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-783184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/19db1dc4f3167e5ab74099e75eb4382ce47bd1c0f36ae31b6c65302bf8a75474-init/diff:/var/lib/docker/overlay2/2e74424088d657fa7b20c4e08819e2c12796efa2ff7323c17fa30eba84a8b965/diff",
	                "MergedDir": "/var/lib/docker/overlay2/19db1dc4f3167e5ab74099e75eb4382ce47bd1c0f36ae31b6c65302bf8a75474/merged",
	                "UpperDir": "/var/lib/docker/overlay2/19db1dc4f3167e5ab74099e75eb4382ce47bd1c0f36ae31b6c65302bf8a75474/diff",
	                "WorkDir": "/var/lib/docker/overlay2/19db1dc4f3167e5ab74099e75eb4382ce47bd1c0f36ae31b6c65302bf8a75474/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-783184",
	                "Source": "/var/lib/docker/volumes/addons-783184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-783184",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-783184",
	                "name.minikube.sigs.k8s.io": "addons-783184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e5c6e3696754ecd48aaa465af175202ef6b1955dd259cada0d71338af09fdf98",
	            "SandboxKey": "/var/run/docker/netns/e5c6e3696754",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-783184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "69ac39ed713fa3e3a619f182622e519bf10b6f9ab774854bca00a598a2227e89",
	                    "EndpointID": "42c1bf0896b2692ebc4e1018ab0dcdfa8f89cf374b3955996d25bd43b1f53ce5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-783184",
	                        "93cfb52902ba"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-783184 -n addons-783184
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-783184 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-783184 logs -n 25: (1.5975606s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-168781   | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC |                     |
	|         | -p download-only-168781              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC | 24 Sep 24 18:37 UTC |
	| delete  | -p download-only-168781              | download-only-168781   | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC | 24 Sep 24 18:37 UTC |
	| start   | -o=json --download-only              | download-only-679007   | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC |                     |
	|         | -p download-only-679007              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC | 24 Sep 24 18:37 UTC |
	| delete  | -p download-only-679007              | download-only-679007   | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC | 24 Sep 24 18:37 UTC |
	| delete  | -p download-only-168781              | download-only-168781   | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC | 24 Sep 24 18:37 UTC |
	| delete  | -p download-only-679007              | download-only-679007   | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC | 24 Sep 24 18:37 UTC |
	| start   | --download-only -p                   | download-docker-263463 | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC |                     |
	|         | download-docker-263463               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-263463            | download-docker-263463 | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC | 24 Sep 24 18:37 UTC |
	| start   | --download-only -p                   | binary-mirror-636468   | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC |                     |
	|         | binary-mirror-636468                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43135               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-636468              | binary-mirror-636468   | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC | 24 Sep 24 18:37 UTC |
	| addons  | disable dashboard -p                 | addons-783184          | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC |                     |
	|         | addons-783184                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-783184          | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC |                     |
	|         | addons-783184                        |                        |         |         |                     |                     |
	| start   | -p addons-783184 --wait=true         | addons-783184          | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC | 24 Sep 24 18:41 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:37:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:37:25.242580  446193 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:37:25.242794  446193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:37:25.242821  446193 out.go:358] Setting ErrFile to fd 2...
	I0924 18:37:25.242839  446193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:37:25.243144  446193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
	I0924 18:37:25.243740  446193 out.go:352] Setting JSON to false
	I0924 18:37:25.244766  446193 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8395,"bootTime":1727194651,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0924 18:37:25.244879  446193 start.go:139] virtualization:  
	I0924 18:37:25.249002  446193 out.go:177] * [addons-783184] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0924 18:37:25.251411  446193 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:37:25.251632  446193 notify.go:220] Checking for updates...
	I0924 18:37:25.255992  446193 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:37:25.258348  446193 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-440051/kubeconfig
	I0924 18:37:25.260650  446193 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-440051/.minikube
	I0924 18:37:25.262930  446193 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0924 18:37:25.265282  446193 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:37:25.267961  446193 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:37:25.299028  446193 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 18:37:25.299147  446193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:37:25.350593  446193 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-24 18:37:25.341109881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:37:25.350715  446193 docker.go:318] overlay module found
	I0924 18:37:25.353351  446193 out.go:177] * Using the docker driver based on user configuration
	I0924 18:37:25.355266  446193 start.go:297] selected driver: docker
	I0924 18:37:25.355290  446193 start.go:901] validating driver "docker" against <nil>
	I0924 18:37:25.355305  446193 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:37:25.355965  446193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:37:25.402414  446193 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-24 18:37:25.392821538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:37:25.402618  446193 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:37:25.402852  446193 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:37:25.405165  446193 out.go:177] * Using Docker driver with root privileges
	I0924 18:37:25.407544  446193 cni.go:84] Creating CNI manager for ""
	I0924 18:37:25.407612  446193 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0924 18:37:25.407626  446193 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0924 18:37:25.407714  446193 start.go:340] cluster config:
	{Name:addons-783184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-783184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:37:25.413965  446193 out.go:177] * Starting "addons-783184" primary control-plane node in "addons-783184" cluster
	I0924 18:37:25.416582  446193 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0924 18:37:25.418844  446193 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0924 18:37:25.421069  446193 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0924 18:37:25.421106  446193 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0924 18:37:25.421131  446193 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-440051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0924 18:37:25.421141  446193 cache.go:56] Caching tarball of preloaded images
	I0924 18:37:25.421241  446193 preload.go:172] Found /home/jenkins/minikube-integration/19700-440051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 18:37:25.421252  446193 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0924 18:37:25.421700  446193 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/config.json ...
	I0924 18:37:25.421779  446193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/config.json: {Name:mkd89b82417b0d16afbe94d658bac1df91e49057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:37:25.436574  446193 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0924 18:37:25.436728  446193 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0924 18:37:25.436753  446193 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0924 18:37:25.436762  446193 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0924 18:37:25.436770  446193 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0924 18:37:25.436775  446193 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0924 18:37:42.782639  446193 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0924 18:37:42.782680  446193 cache.go:194] Successfully downloaded all kic artifacts
	I0924 18:37:42.782711  446193 start.go:360] acquireMachinesLock for addons-783184: {Name:mkcb2d7912881a5ed1f67f7099182a6679c22d7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:37:42.782831  446193 start.go:364] duration metric: took 92.126µs to acquireMachinesLock for "addons-783184"
	I0924 18:37:42.782861  446193 start.go:93] Provisioning new machine with config: &{Name:addons-783184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-783184 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0924 18:37:42.782958  446193 start.go:125] createHost starting for "" (driver="docker")
	I0924 18:37:42.786518  446193 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0924 18:37:42.786771  446193 start.go:159] libmachine.API.Create for "addons-783184" (driver="docker")
	I0924 18:37:42.786810  446193 client.go:168] LocalClient.Create starting
	I0924 18:37:42.786936  446193 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19700-440051/.minikube/certs/ca.pem
	I0924 18:37:43.227988  446193 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19700-440051/.minikube/certs/cert.pem
	I0924 18:37:43.694790  446193 cli_runner.go:164] Run: docker network inspect addons-783184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0924 18:37:43.708763  446193 cli_runner.go:211] docker network inspect addons-783184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0924 18:37:43.708867  446193 network_create.go:284] running [docker network inspect addons-783184] to gather additional debugging logs...
	I0924 18:37:43.708893  446193 cli_runner.go:164] Run: docker network inspect addons-783184
	W0924 18:37:43.722575  446193 cli_runner.go:211] docker network inspect addons-783184 returned with exit code 1
	I0924 18:37:43.722613  446193 network_create.go:287] error running [docker network inspect addons-783184]: docker network inspect addons-783184: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-783184 not found
	I0924 18:37:43.722627  446193 network_create.go:289] output of [docker network inspect addons-783184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-783184 not found
	
	** /stderr **
	I0924 18:37:43.722725  446193 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0924 18:37:43.739216  446193 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001867d60}
	I0924 18:37:43.739263  446193 network_create.go:124] attempt to create docker network addons-783184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0924 18:37:43.739322  446193 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-783184 addons-783184
	I0924 18:37:43.812785  446193 network_create.go:108] docker network addons-783184 192.168.49.0/24 created
	I0924 18:37:43.812818  446193 kic.go:121] calculated static IP "192.168.49.2" for the "addons-783184" container
	I0924 18:37:43.812901  446193 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0924 18:37:43.827813  446193 cli_runner.go:164] Run: docker volume create addons-783184 --label name.minikube.sigs.k8s.io=addons-783184 --label created_by.minikube.sigs.k8s.io=true
	I0924 18:37:43.844517  446193 oci.go:103] Successfully created a docker volume addons-783184
	I0924 18:37:43.844617  446193 cli_runner.go:164] Run: docker run --rm --name addons-783184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-783184 --entrypoint /usr/bin/test -v addons-783184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0924 18:37:45.895321  446193 cli_runner.go:217] Completed: docker run --rm --name addons-783184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-783184 --entrypoint /usr/bin/test -v addons-783184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (2.050658761s)
	I0924 18:37:45.895353  446193 oci.go:107] Successfully prepared a docker volume addons-783184
	I0924 18:37:45.895385  446193 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0924 18:37:45.895410  446193 kic.go:194] Starting extracting preloaded images to volume ...
	I0924 18:37:45.895482  446193 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19700-440051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-783184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0924 18:37:49.934293  446193 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19700-440051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-783184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (4.038770488s)
	I0924 18:37:49.934327  446193 kic.go:203] duration metric: took 4.038914019s to extract preloaded images to volume ...
	W0924 18:37:49.934480  446193 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0924 18:37:49.934665  446193 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0924 18:37:49.987127  446193 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-783184 --name addons-783184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-783184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-783184 --network addons-783184 --ip 192.168.49.2 --volume addons-783184:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0924 18:37:50.322309  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Running}}
	I0924 18:37:50.342777  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:37:50.370750  446193 cli_runner.go:164] Run: docker exec addons-783184 stat /var/lib/dpkg/alternatives/iptables
	I0924 18:37:50.433771  446193 oci.go:144] the created container "addons-783184" has a running status.
	I0924 18:37:50.433816  446193 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa...
	I0924 18:37:50.982238  446193 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0924 18:37:51.010604  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:37:51.036651  446193 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0924 18:37:51.036674  446193 kic_runner.go:114] Args: [docker exec --privileged addons-783184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0924 18:37:51.116641  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:37:51.155609  446193 machine.go:93] provisionDockerMachine start ...
	I0924 18:37:51.155720  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:37:51.176620  446193 main.go:141] libmachine: Using SSH client type: native
	I0924 18:37:51.176885  446193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0924 18:37:51.176896  446193 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 18:37:51.329117  446193 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-783184
	
	I0924 18:37:51.329162  446193 ubuntu.go:169] provisioning hostname "addons-783184"
	I0924 18:37:51.329232  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:37:51.350487  446193 main.go:141] libmachine: Using SSH client type: native
	I0924 18:37:51.350736  446193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0924 18:37:51.350749  446193 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-783184 && echo "addons-783184" | sudo tee /etc/hostname
	I0924 18:37:51.498547  446193 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-783184
	
	I0924 18:37:51.498708  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:37:51.519251  446193 main.go:141] libmachine: Using SSH client type: native
	I0924 18:37:51.519492  446193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0924 18:37:51.519510  446193 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-783184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-783184/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-783184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:37:51.657502  446193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:37:51.657531  446193 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19700-440051/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-440051/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-440051/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-440051/.minikube}
	I0924 18:37:51.657566  446193 ubuntu.go:177] setting up certificates
	I0924 18:37:51.657582  446193 provision.go:84] configureAuth start
	I0924 18:37:51.657648  446193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-783184
	I0924 18:37:51.676734  446193 provision.go:143] copyHostCerts
	I0924 18:37:51.676823  446193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-440051/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-440051/.minikube/ca.pem (1078 bytes)
	I0924 18:37:51.676941  446193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-440051/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-440051/.minikube/cert.pem (1123 bytes)
	I0924 18:37:51.677005  446193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-440051/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-440051/.minikube/key.pem (1675 bytes)
	I0924 18:37:51.677057  446193 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-440051/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-440051/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-440051/.minikube/certs/ca-key.pem org=jenkins.addons-783184 san=[127.0.0.1 192.168.49.2 addons-783184 localhost minikube]
	I0924 18:37:51.873532  446193 provision.go:177] copyRemoteCerts
	I0924 18:37:51.873601  446193 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:37:51.873672  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:37:51.892487  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:37:51.986140  446193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-440051/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 18:37:52.017118  446193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-440051/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 18:37:52.042101  446193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-440051/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:37:52.068410  446193 provision.go:87] duration metric: took 410.808754ms to configureAuth
	I0924 18:37:52.068484  446193 ubuntu.go:193] setting minikube options for container-runtime
	I0924 18:37:52.068708  446193 config.go:182] Loaded profile config "addons-783184": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 18:37:52.068745  446193 machine.go:96] duration metric: took 913.111682ms to provisionDockerMachine
	I0924 18:37:52.068760  446193 client.go:171] duration metric: took 9.281940802s to LocalClient.Create
	I0924 18:37:52.068781  446193 start.go:167] duration metric: took 9.282011061s to libmachine.API.Create "addons-783184"
	I0924 18:37:52.068793  446193 start.go:293] postStartSetup for "addons-783184" (driver="docker")
	I0924 18:37:52.068806  446193 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:37:52.068864  446193 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:37:52.068917  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:37:52.085831  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:37:52.178713  446193 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:37:52.181923  446193 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0924 18:37:52.181962  446193 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0924 18:37:52.181974  446193 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0924 18:37:52.181981  446193 info.go:137] Remote host: Ubuntu 22.04.5 LTS
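
The three "Couldn't set key" warnings above are benign: libmachine decodes /etc/os-release into a struct and logs any key that has no matching field. A minimal Go sketch of the same parse, simplified into a map (illustrative only, not libmachine's actual decoder):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // parseOSRelease reads KEY=value lines from an os-release file into a map.
    // Keys without a matching struct field are what trigger the
    // "Couldn't set key ..." warnings in libmachine; a map sidesteps that.
    func parseOSRelease(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        out := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, `"`)
        }
        return out, sc.Err()
    }

    func main() {
        info, err := parseOSRelease("/etc/os-release")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(info["PRETTY_NAME"]) // e.g. "Ubuntu 22.04.5 LTS"
    }
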
	I0924 18:37:52.181991  446193 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-440051/.minikube/addons for local assets ...
	I0924 18:37:52.182062  446193 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-440051/.minikube/files for local assets ...
	I0924 18:37:52.182089  446193 start.go:296] duration metric: took 113.290123ms for postStartSetup
	I0924 18:37:52.182394  446193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-783184
	I0924 18:37:52.198230  446193 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/config.json ...
	I0924 18:37:52.198520  446193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 18:37:52.198572  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:37:52.214395  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:37:52.302271  446193 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0924 18:37:52.306482  446193 start.go:128] duration metric: took 9.523509703s to createHost
	I0924 18:37:52.306550  446193 start.go:83] releasing machines lock for "addons-783184", held for 9.523704655s
	I0924 18:37:52.306656  446193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-783184
	I0924 18:37:52.328247  446193 ssh_runner.go:195] Run: cat /version.json
	I0924 18:37:52.328268  446193 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:37:52.328297  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:37:52.328327  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:37:52.347795  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:37:52.357642  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:37:52.578263  446193 ssh_runner.go:195] Run: systemctl --version
	I0924 18:37:52.582781  446193 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0924 18:37:52.586953  446193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0924 18:37:52.612369  446193 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0924 18:37:52.612462  446193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:37:52.643806  446193 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
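
The two find commands above first patch the preinstalled loopback CNI config, then park any bridge/podman configs under a .mk_disabled suffix so the recommended kindnet CNI can own the pod network. A rough Go equivalent of the disable step (a sketch; the real code shells out to find(1) inside the node):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs to *.mk_disabled,
    // matching the `find ... -exec mv {} {}.mk_disabled` step in the log.
    func disableBridgeCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
                continue
            }
            src := filepath.Join(dir, name)
            if err := os.Rename(src, src+".mk_disabled"); err != nil {
                return disabled, err
            }
            disabled = append(disabled, src)
        }
        return disabled, nil
    }

    func main() {
        got, err := disableBridgeCNI("/etc/cni/net.d")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
        fmt.Println("disabled:", got)
    }
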
	I0924 18:37:52.643835  446193 start.go:495] detecting cgroup driver to use...
	I0924 18:37:52.643869  446193 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0924 18:37:52.643956  446193 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0924 18:37:52.656531  446193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0924 18:37:52.668253  446193 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:37:52.668342  446193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:37:52.682892  446193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:37:52.698074  446193 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:37:52.786866  446193 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:37:52.881507  446193 docker.go:233] disabling docker service ...
	I0924 18:37:52.881625  446193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:37:52.902779  446193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:37:52.915128  446193 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:37:53.026430  446193 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:37:53.120360  446193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 18:37:53.131731  446193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:37:53.150560  446193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0924 18:37:53.160743  446193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0924 18:37:53.171180  446193 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0924 18:37:53.171353  446193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0924 18:37:53.181490  446193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 18:37:53.191619  446193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0924 18:37:53.201461  446193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 18:37:53.211468  446193 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:37:53.220883  446193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0924 18:37:53.230944  446193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0924 18:37:53.240724  446193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0924 18:37:53.250751  446193 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:37:53.259284  446193 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 18:37:53.267854  446193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:37:53.348929  446193 ssh_runner.go:195] Run: sudo systemctl restart containerd
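
The run of sed edits above rewrites /etc/containerd/config.toml in place (pause image, cgroup driver, runtime type, CNI conf dir) before the daemon-reload and containerd restart. A hedged Go sketch of two of those rewrites as regexp substitutions (a simplified stand-in for the sed-over-SSH calls, not minikube's code):

    package main

    import (
        "fmt"
        "regexp"
    )

    // patchContainerdConfig pins the sandbox image and forces
    // SystemdCgroup=false, matching the "cgroupfs" driver chosen above.
    func patchContainerdConfig(toml string) string {
        sandbox := regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`)
        toml = sandbox.ReplaceAllString(toml, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`)
        cgroup := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        toml = cgroup.ReplaceAllString(toml, `${1}SystemdCgroup = false`)
        return toml
    }

    func main() {
        in := "  sandbox_image = \"registry.k8s.io/pause:3.9\"\n  SystemdCgroup = true\n"
        fmt.Print(patchContainerdConfig(in))
    }
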
	I0924 18:37:53.468106  446193 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0924 18:37:53.468262  446193 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0924 18:37:53.471736  446193 start.go:563] Will wait 60s for crictl version
	I0924 18:37:53.471836  446193 ssh_runner.go:195] Run: which crictl
	I0924 18:37:53.475113  446193 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:37:53.509454  446193 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0924 18:37:53.509542  446193 ssh_runner.go:195] Run: containerd --version
	I0924 18:37:53.532930  446193 ssh_runner.go:195] Run: containerd --version
	I0924 18:37:53.559581  446193 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0924 18:37:53.561768  446193 cli_runner.go:164] Run: docker network inspect addons-783184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0924 18:37:53.577388  446193 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0924 18:37:53.580995  446193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
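
The /etc/hosts update above uses a filter-then-append shell idiom: strip any line already ending in a tab plus the hostname, then write the file back with a fresh mapping, so repeated runs stay idempotent. The same idea in Go (a sketch; it operates on a local demo file rather than the node's /etc/hosts):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any line already ending in "\t<host>" and
    // appends a fresh "<ip>\t<host>" mapping, mirroring the shell trick.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        suffix := "\t" + host
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, suffix) {
                continue // stale entry, re-added below
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+suffix)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        _ = os.WriteFile("hosts.txt", []byte("127.0.0.1\tlocalhost\n"), 0o644)
        if err := ensureHostsEntry("hosts.txt", "192.168.49.1", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }
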
	I0924 18:37:53.591893  446193 kubeadm.go:883] updating cluster {Name:addons-783184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-783184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 18:37:53.592028  446193 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0924 18:37:53.592093  446193 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:37:53.629823  446193 containerd.go:627] all images are preloaded for containerd runtime.
	I0924 18:37:53.629848  446193 containerd.go:534] Images already preloaded, skipping extraction
	I0924 18:37:53.629908  446193 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:37:53.665692  446193 containerd.go:627] all images are preloaded for containerd runtime.
	I0924 18:37:53.665718  446193 cache_images.go:84] Images are preloaded, skipping loading
	I0924 18:37:53.665726  446193 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0924 18:37:53.665832  446193 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-783184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-783184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 18:37:53.665907  446193 ssh_runner.go:195] Run: sudo crictl info
	I0924 18:37:53.705817  446193 cni.go:84] Creating CNI manager for ""
	I0924 18:37:53.705841  446193 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0924 18:37:53.705852  446193 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 18:37:53.705897  446193 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-783184 NodeName:addons-783184 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 18:37:53.706064  446193 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-783184"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 18:37:53.706136  446193 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:37:53.714903  446193 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 18:37:53.714974  446193 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 18:37:53.723671  446193 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 18:37:53.741692  446193 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:37:53.759777  446193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
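
The kubeadm.yaml shipped above (2167 bytes) is rendered from the options struct logged earlier. A toy Go text/template rendering of a few ClusterConfiguration fields, with values taken from this run (the template and field names here are illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // A cut-down stand-in for the ClusterConfiguration template.
    const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    clusterName: {{.ClusterName}}
    controlPlaneEndpoint: {{.Endpoint}}
    kubernetesVersion: {{.Version}}
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
        if err := t.Execute(os.Stdout, map[string]string{
            "ClusterName": "mk",
            "Endpoint":    "control-plane.minikube.internal:8443",
            "Version":     "v1.31.1",
            "PodSubnet":   "10.244.0.0/16",
        }); err != nil {
            panic(err)
        }
    }
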
	I0924 18:37:53.777981  446193 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0924 18:37:53.781585  446193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:37:53.792964  446193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:37:53.881423  446193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:37:53.897886  446193 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184 for IP: 192.168.49.2
	I0924 18:37:53.897952  446193 certs.go:194] generating shared ca certs ...
	I0924 18:37:53.897983  446193 certs.go:226] acquiring lock for ca certs: {Name:mkc62fc2ccc794a5cedf26dce205cca22588d47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:37:53.898141  446193 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-440051/.minikube/ca.key
	I0924 18:37:54.335338  446193 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-440051/.minikube/ca.crt ...
	I0924 18:37:54.335375  446193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-440051/.minikube/ca.crt: {Name:mk41de5a40c957082b07c5e5522192ac32df909d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:37:54.335586  446193 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-440051/.minikube/ca.key ...
	I0924 18:37:54.335599  446193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-440051/.minikube/ca.key: {Name:mkb588b997c3c9a57e7cc494e0ad159c75d24af0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:37:54.335692  446193 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-440051/.minikube/proxy-client-ca.key
	I0924 18:37:54.796880  446193 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-440051/.minikube/proxy-client-ca.crt ...
	I0924 18:37:54.796911  446193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-440051/.minikube/proxy-client-ca.crt: {Name:mk1db8c85b7ac3320f9b9b4fcd1f8947f353e327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:37:54.797088  446193 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-440051/.minikube/proxy-client-ca.key ...
	I0924 18:37:54.797103  446193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-440051/.minikube/proxy-client-ca.key: {Name:mk1a6ad1d2f5bce860bc93f13140c07a4f3de5eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:37:54.797173  446193 certs.go:256] generating profile certs ...
	I0924 18:37:54.797244  446193 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.key
	I0924 18:37:54.797261  446193 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt with IP's: []
	I0924 18:37:56.367549  446193 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt ...
	I0924 18:37:56.367586  446193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: {Name:mkdcc7de2e953da8c89df3c2033a9066da166201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:37:56.367786  446193 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.key ...
	I0924 18:37:56.367801  446193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.key: {Name:mk5d7f71748d3a45be05619139eebe49642a451a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:37:56.367878  446193 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/apiserver.key.5eecb0aa
	I0924 18:37:56.367902  446193 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/apiserver.crt.5eecb0aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0924 18:37:56.915365  446193 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/apiserver.crt.5eecb0aa ...
	I0924 18:37:56.915400  446193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/apiserver.crt.5eecb0aa: {Name:mk0669b5bf070621dc3af0dc8327722c9fd3c744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:37:56.915594  446193 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/apiserver.key.5eecb0aa ...
	I0924 18:37:56.915610  446193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/apiserver.key.5eecb0aa: {Name:mk5d8f1af1915128105ededb4d982600cd8a963c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:37:56.916207  446193 certs.go:381] copying /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/apiserver.crt.5eecb0aa -> /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/apiserver.crt
	I0924 18:37:56.916304  446193 certs.go:385] copying /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/apiserver.key.5eecb0aa -> /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/apiserver.key
	I0924 18:37:56.916377  446193 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/proxy-client.key
	I0924 18:37:56.916398  446193 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/proxy-client.crt with IP's: []
	I0924 18:37:57.205286  446193 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/proxy-client.crt ...
	I0924 18:37:57.205322  446193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/proxy-client.crt: {Name:mkdc508fabb98207c0a60b399f671ec05dbacdc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:37:57.205950  446193 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/proxy-client.key ...
	I0924 18:37:57.205970  446193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/proxy-client.key: {Name:mk164ff26e3b33c126e6445d21c7e59a6f40a51d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
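
The certs.go steps above boil down to the standard crypto/x509 flow: generate a key, self-sign a CA certificate, then use that CA to sign the apiserver and client certs. A compact sketch of just the CA step (simplified to RSA 2048, a fixed serial, and one-year validity; not the exact minikube parameters):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "time"
    )

    // newCA creates a self-signed CA cert plus key, the shape of what
    // certs.go writes out as ca.crt / ca.key above.
    func newCA(cn string) (certPEM, keyPEM []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: cn},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(1, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        // Template == parent: the certificate signs itself.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }

    func main() {
        cert, _, err := newCA("minikubeCA")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(cert))
    }
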
	I0924 18:37:57.206726  446193 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-440051/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:37:57.206773  446193 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-440051/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:37:57.206804  446193 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-440051/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:37:57.206833  446193 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-440051/.minikube/certs/key.pem (1675 bytes)
	I0924 18:37:57.207440  446193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-440051/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:37:57.233000  446193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-440051/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:37:57.258470  446193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-440051/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:37:57.283092  446193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-440051/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:37:57.307670  446193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0924 18:37:57.331734  446193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 18:37:57.356034  446193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:37:57.380252  446193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 18:37:57.404512  446193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-440051/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:37:57.429271  446193 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 18:37:57.448221  446193 ssh_runner.go:195] Run: openssl version
	I0924 18:37:57.454040  446193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:37:57.464064  446193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:37:57.467742  446193 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:37:57.467819  446193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:37:57.474675  446193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
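
The symlink created above, /etc/ssl/certs/b5213941.0, is the OpenSSL subject-hash name for minikubeCA.pem, which is why the `openssl x509 -hash -noout` call precedes it: OpenSSL-based clients locate CAs by that hashed filename. The same two steps in Go (a sketch run locally; the real flow executes them over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash asks openssl for the cert's subject hash, then
    // symlinks <hash>.0 in the system cert dir at the cert.
    func linkBySubjectHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := certsDir + "/" + hash + ".0"
        os.Remove(link) // ignore error: the link may not exist yet
        return os.Symlink(certPath, link)
    }

    func main() {
        err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
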
	I0924 18:37:57.484130  446193 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:37:57.487715  446193 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:37:57.487762  446193 kubeadm.go:392] StartCluster: {Name:addons-783184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-783184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:37:57.487843  446193 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0924 18:37:57.487905  446193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 18:37:57.523585  446193 cri.go:89] found id: ""
	I0924 18:37:57.523666  446193 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 18:37:57.532507  446193 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 18:37:57.541457  446193 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0924 18:37:57.541566  446193 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 18:37:57.550523  446193 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 18:37:57.550586  446193 kubeadm.go:157] found existing configuration files:
	
	I0924 18:37:57.550649  446193 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 18:37:57.559219  446193 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 18:37:57.559291  446193 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 18:37:57.567897  446193 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 18:37:57.576551  446193 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 18:37:57.576633  446193 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 18:37:57.584907  446193 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 18:37:57.593456  446193 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 18:37:57.593546  446193 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 18:37:57.602011  446193 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 18:37:57.611316  446193 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 18:37:57.611448  446193 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
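
The grep/rm pairs above implement stale-config cleanup: any kubeconfig that does not already point at https://control-plane.minikube.internal:8443 is deleted so kubeadm can regenerate it. Condensed into a Go loop (a sketch, run locally rather than over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // cleanStaleConfigs keeps a kubeconfig only if it already contains the
    // expected control-plane endpoint; otherwise it removes the file.
    func cleanStaleConfigs(endpoint string, paths []string) {
        for _, p := range paths {
            // grep exits non-zero when the endpoint is missing (or the
            // file does not exist), which is the cue to delete the file.
            if err := exec.Command("grep", endpoint, p).Run(); err != nil {
                os.Remove(p)
                fmt.Printf("removed stale %s\n", p)
            }
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
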
	I0924 18:37:57.621306  446193 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0924 18:37:57.657513  446193 kubeadm.go:310] W0924 18:37:57.656747    1036 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:37:57.658465  446193 kubeadm.go:310] W0924 18:37:57.657938    1036 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:37:57.687685  446193 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0924 18:37:57.759027  446193 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 18:38:13.481260  446193 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 18:38:13.481321  446193 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 18:38:13.481433  446193 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0924 18:38:13.481502  446193 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0924 18:38:13.481542  446193 kubeadm.go:310] OS: Linux
	I0924 18:38:13.481614  446193 kubeadm.go:310] CGROUPS_CPU: enabled
	I0924 18:38:13.481674  446193 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0924 18:38:13.481732  446193 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0924 18:38:13.481794  446193 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0924 18:38:13.481851  446193 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0924 18:38:13.481902  446193 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0924 18:38:13.481953  446193 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0924 18:38:13.482007  446193 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0924 18:38:13.482058  446193 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0924 18:38:13.482131  446193 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 18:38:13.482227  446193 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 18:38:13.482318  446193 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 18:38:13.482383  446193 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 18:38:13.484786  446193 out.go:235]   - Generating certificates and keys ...
	I0924 18:38:13.484878  446193 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 18:38:13.484945  446193 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 18:38:13.485014  446193 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 18:38:13.485078  446193 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 18:38:13.485144  446193 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 18:38:13.485197  446193 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 18:38:13.485253  446193 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 18:38:13.485371  446193 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-783184 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0924 18:38:13.485454  446193 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 18:38:13.485572  446193 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-783184 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0924 18:38:13.485640  446193 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 18:38:13.485706  446193 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 18:38:13.485753  446193 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 18:38:13.485818  446193 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 18:38:13.485872  446193 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 18:38:13.485931  446193 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 18:38:13.485995  446193 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 18:38:13.486064  446193 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 18:38:13.486120  446193 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 18:38:13.486204  446193 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 18:38:13.486273  446193 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 18:38:13.488125  446193 out.go:235]   - Booting up control plane ...
	I0924 18:38:13.488272  446193 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 18:38:13.488390  446193 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 18:38:13.488489  446193 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 18:38:13.488606  446193 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 18:38:13.488705  446193 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 18:38:13.488770  446193 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 18:38:13.488925  446193 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 18:38:13.489085  446193 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 18:38:13.489172  446193 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001854504s
	I0924 18:38:13.489293  446193 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 18:38:13.489375  446193 kubeadm.go:310] [api-check] The API server is healthy after 6.001679926s
	I0924 18:38:13.489549  446193 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 18:38:13.489720  446193 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 18:38:13.489802  446193 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 18:38:13.490027  446193 kubeadm.go:310] [mark-control-plane] Marking the node addons-783184 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 18:38:13.490097  446193 kubeadm.go:310] [bootstrap-token] Using token: 4sc01p.7p933z1vnxse3mtp
	I0924 18:38:13.492358  446193 out.go:235]   - Configuring RBAC rules ...
	I0924 18:38:13.492507  446193 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 18:38:13.492603  446193 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 18:38:13.492755  446193 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 18:38:13.492918  446193 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 18:38:13.493045  446193 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 18:38:13.493138  446193 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 18:38:13.493267  446193 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 18:38:13.493318  446193 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 18:38:13.493389  446193 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 18:38:13.493419  446193 kubeadm.go:310] 
	I0924 18:38:13.493574  446193 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 18:38:13.493590  446193 kubeadm.go:310] 
	I0924 18:38:13.493733  446193 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 18:38:13.493745  446193 kubeadm.go:310] 
	I0924 18:38:13.493775  446193 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 18:38:13.493843  446193 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 18:38:13.493922  446193 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 18:38:13.493933  446193 kubeadm.go:310] 
	I0924 18:38:13.494005  446193 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 18:38:13.494017  446193 kubeadm.go:310] 
	I0924 18:38:13.494078  446193 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 18:38:13.494086  446193 kubeadm.go:310] 
	I0924 18:38:13.494162  446193 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 18:38:13.494257  446193 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 18:38:13.494326  446193 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 18:38:13.494331  446193 kubeadm.go:310] 
	I0924 18:38:13.494414  446193 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 18:38:13.494489  446193 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 18:38:13.494498  446193 kubeadm.go:310] 
	I0924 18:38:13.494582  446193 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4sc01p.7p933z1vnxse3mtp \
	I0924 18:38:13.494682  446193 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a38b3e34527ef5dc2482a099948c7069d28274afedb4ce8d0d61d9acee11985 \
	I0924 18:38:13.494705  446193 kubeadm.go:310] 	--control-plane 
	I0924 18:38:13.494709  446193 kubeadm.go:310] 
	I0924 18:38:13.494792  446193 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 18:38:13.494797  446193 kubeadm.go:310] 
	I0924 18:38:13.494879  446193 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4sc01p.7p933z1vnxse3mtp \
	I0924 18:38:13.494995  446193 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a38b3e34527ef5dc2482a099948c7069d28274afedb4ce8d0d61d9acee11985 
	I0924 18:38:13.495003  446193 cni.go:84] Creating CNI manager for ""
	I0924 18:38:13.495011  446193 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0924 18:38:13.497203  446193 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0924 18:38:13.499681  446193 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0924 18:38:13.504032  446193 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0924 18:38:13.504054  446193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0924 18:38:13.526947  446193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0924 18:38:13.799682  446193 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 18:38:13.799828  446193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:38:13.799915  446193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-783184 minikube.k8s.io/updated_at=2024_09_24T18_38_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=addons-783184 minikube.k8s.io/primary=true
	I0924 18:38:13.930773  446193 ops.go:34] apiserver oom_adj: -16
	I0924 18:38:13.930915  446193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:38:14.431513  446193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:38:14.931750  446193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:38:15.431385  446193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:38:15.931003  446193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:38:16.431782  446193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:38:16.931345  446193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:38:17.431242  446193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:38:17.931104  446193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:38:18.050963  446193 kubeadm.go:1113] duration metric: took 4.251182117s to wait for elevateKubeSystemPrivileges
	I0924 18:38:18.050999  446193 kubeadm.go:394] duration metric: took 20.563240059s to StartCluster
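
The burst of `kubectl get sa default` calls above, spaced roughly 500ms apart, is a readiness poll: elevateKubeSystemPrivileges waits for the default ServiceAccount to exist before binding cluster-admin. A minimal Go version of that poll (binary path and flags shortened from the logged command):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds
    // or the timeout elapses.
    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
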
	I0924 18:38:18.051018  446193 settings.go:142] acquiring lock: {Name:mk3474ab2e32bda4466d93100746590bcb646da8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:38:18.051155  446193 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-440051/kubeconfig
	I0924 18:38:18.051569  446193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-440051/kubeconfig: {Name:mkbcffc23d82b4297a6a95e203a67e556e7df2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:38:18.051812  446193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 18:38:18.051829  446193 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0924 18:38:18.052125  446193 config.go:182] Loaded profile config "addons-783184": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 18:38:18.052170  446193 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0924 18:38:18.052248  446193 addons.go:69] Setting yakd=true in profile "addons-783184"
	I0924 18:38:18.052263  446193 addons.go:234] Setting addon yakd=true in "addons-783184"
	I0924 18:38:18.052295  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.052787  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.053377  446193 addons.go:69] Setting metrics-server=true in profile "addons-783184"
	I0924 18:38:18.053436  446193 addons.go:234] Setting addon metrics-server=true in "addons-783184"
	I0924 18:38:18.053470  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.053943  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.054213  446193 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-783184"
	I0924 18:38:18.054236  446193 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-783184"
	I0924 18:38:18.054263  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.054705  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.057181  446193 addons.go:69] Setting registry=true in profile "addons-783184"
	I0924 18:38:18.057228  446193 addons.go:234] Setting addon registry=true in "addons-783184"
	I0924 18:38:18.057300  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.057910  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.063173  446193 addons.go:69] Setting cloud-spanner=true in profile "addons-783184"
	I0924 18:38:18.063378  446193 addons.go:234] Setting addon cloud-spanner=true in "addons-783184"
	I0924 18:38:18.063536  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.064345  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.073066  446193 addons.go:69] Setting storage-provisioner=true in profile "addons-783184"
	I0924 18:38:18.073122  446193 addons.go:234] Setting addon storage-provisioner=true in "addons-783184"
	I0924 18:38:18.073177  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.073540  446193 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-783184"
	I0924 18:38:18.073632  446193 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-783184"
	I0924 18:38:18.073679  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.074243  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.076546  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.089736  446193 addons.go:69] Setting default-storageclass=true in profile "addons-783184"
	I0924 18:38:18.089781  446193 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-783184"
	I0924 18:38:18.090544  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.093322  446193 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-783184"
	I0924 18:38:18.093358  446193 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-783184"
	I0924 18:38:18.093912  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.112184  446193 addons.go:69] Setting gcp-auth=true in profile "addons-783184"
	I0924 18:38:18.112219  446193 mustload.go:65] Loading cluster: addons-783184
	I0924 18:38:18.112434  446193 config.go:182] Loaded profile config "addons-783184": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 18:38:18.113743  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.129978  446193 addons.go:69] Setting ingress=true in profile "addons-783184"
	I0924 18:38:18.130083  446193 addons.go:234] Setting addon ingress=true in "addons-783184"
	I0924 18:38:18.130151  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.130802  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.133462  446193 addons.go:69] Setting volcano=true in profile "addons-783184"
	I0924 18:38:18.133551  446193 addons.go:234] Setting addon volcano=true in "addons-783184"
	I0924 18:38:18.133633  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.134238  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.150232  446193 addons.go:69] Setting ingress-dns=true in profile "addons-783184"
	I0924 18:38:18.150266  446193 addons.go:234] Setting addon ingress-dns=true in "addons-783184"
	I0924 18:38:18.150314  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.150843  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.157479  446193 addons.go:69] Setting volumesnapshots=true in profile "addons-783184"
	I0924 18:38:18.157567  446193 addons.go:234] Setting addon volumesnapshots=true in "addons-783184"
	I0924 18:38:18.157646  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.158183  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.176760  446193 addons.go:69] Setting inspektor-gadget=true in profile "addons-783184"
	I0924 18:38:18.176797  446193 addons.go:234] Setting addon inspektor-gadget=true in "addons-783184"
	I0924 18:38:18.176839  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.177342  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
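The block above is minikube flipping every requested addon on in the profile before any manifests ship. The same toggles can be driven by hand with the minikube CLI; a minimal sketch against this profile (the test harness itself goes through the Go addons API, not these commands):

    minikube -p addons-783184 addons enable volcano
    minikube -p addons-783184 addons enable ingress
    minikube -p addons-783184 addons enable csi-hostpath-driver
    # list every addon and its current state for the profile
    minikube -p addons-783184 addons list
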
	I0924 18:38:18.180724  446193 out.go:177] * Verifying Kubernetes components...
	I0924 18:38:18.183097  446193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:38:18.284017  446193 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0924 18:38:18.294591  446193 out.go:177]   - Using image docker.io/registry:2.8.3
	I0924 18:38:18.295719  446193 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0924 18:38:18.295766  446193 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0924 18:38:18.295843  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
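The inspect template in the line above is how each ssh_runner call finds its way into the node: it pulls the host port that Docker published for the container's 22/tcp. Run standalone it looks like this (quoting simplified from the logged form):

    # resolve the published SSH port of the node container
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      addons-783184
    # prints 33164 here, matching the 127.0.0.1:33164 ssh clients below
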
	I0924 18:38:18.302727  446193 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0924 18:38:18.304519  446193 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 18:38:18.304543  446193 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 18:38:18.304616  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:18.336232  446193 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0924 18:38:18.338047  446193 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0924 18:38:18.338072  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0924 18:38:18.338141  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:18.338715  446193 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0924 18:38:18.344921  446193 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0924 18:38:18.344955  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0924 18:38:18.345040  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:18.353513  446193 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0924 18:38:18.357311  446193 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 18:38:18.357337  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0924 18:38:18.357487  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:18.371594  446193 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-783184"
	I0924 18:38:18.371694  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.372195  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.396903  446193 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 18:38:18.399083  446193 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0924 18:38:18.399704  446193 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:38:18.399731  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 18:38:18.399797  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:18.422410  446193 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0924 18:38:18.422431  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0924 18:38:18.422538  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:18.424112  446193 addons.go:234] Setting addon default-storageclass=true in "addons-783184"
	I0924 18:38:18.424151  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.429847  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:18.434845  446193 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0924 18:38:18.440981  446193 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0924 18:38:18.442996  446193 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0924 18:38:18.443156  446193 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0924 18:38:18.445518  446193 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0924 18:38:18.450878  446193 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0924 18:38:18.450914  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0924 18:38:18.451010  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:18.451225  446193 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0924 18:38:18.453030  446193 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0924 18:38:18.458180  446193 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0924 18:38:18.460206  446193 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0924 18:38:18.483663  446193 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0924 18:38:18.489499  446193 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0924 18:38:18.489636  446193 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0924 18:38:18.489664  446193 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0924 18:38:18.489766  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:18.491466  446193 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0924 18:38:18.491492  446193 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0924 18:38:18.491576  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:18.522732  446193 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:38:18.525692  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:18.527325  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:18.537337  446193 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0924 18:38:18.539497  446193 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0924 18:38:18.549975  446193 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:38:18.552645  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:18.585763  446193 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0924 18:38:18.585791  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0924 18:38:18.585863  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:18.586015  446193 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0924 18:38:18.590179  446193 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0924 18:38:18.590216  446193 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0924 18:38:18.590308  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:18.612108  446193 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0924 18:38:18.615602  446193 out.go:177]   - Using image docker.io/busybox:stable
	I0924 18:38:18.622765  446193 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 18:38:18.622860  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0924 18:38:18.622989  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:18.655224  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:18.656482  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:18.677802  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:18.679057  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:18.679385  446193 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 18:38:18.679395  446193 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 18:38:18.679444  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:18.701159  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:18.753759  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:18.766413  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:18.785778  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:18.790733  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:18.806397  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:18.808271  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:18.812582  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	W0924 18:38:18.812788  446193 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0924 18:38:18.812828  446193 retry.go:31] will retry after 133.271978ms: ssh: handshake failed: EOF
	W0924 18:38:18.816825  446193 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0924 18:38:18.816853  446193 retry.go:31] will retry after 219.699123ms: ssh: handshake failed: EOF
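Two of the parallel ssh dials hit a handshake EOF, and retry.go schedules jittered retries (133ms, then 219ms). A fixed-backoff shell analogue of that loop, assuming the key path and docker user shown in the sshutil lines (a sketch, not the harness's actual mechanism):

    delay=0.2
    for attempt in 1 2 3 4 5; do
      # same endpoint the sshutil clients above dial
      ssh -i /home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa \
          -p 33164 -o ConnectTimeout=5 docker@127.0.0.1 true && break
      echo "handshake failed, retry ${attempt} in ${delay}s"
      sleep "$delay"
      delay=$(awk -v d="$delay" 'BEGIN{print d*2}')   # double the backoff
    done
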
	I0924 18:38:19.322070  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0924 18:38:19.404926  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 18:38:19.466387  446193 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0924 18:38:19.466469  446193 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0924 18:38:19.488484  446193 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0924 18:38:19.488556  446193 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0924 18:38:19.499520  446193 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 18:38:19.499596  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0924 18:38:19.528891  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0924 18:38:19.597690  446193 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0924 18:38:19.597760  446193 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0924 18:38:19.602774  446193 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.419599451s)
	I0924 18:38:19.602877  446193 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.551042016s)
	I0924 18:38:19.603057  446193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0924 18:38:19.603213  446193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:38:19.603753  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 18:38:19.641660  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:38:19.660095  446193 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0924 18:38:19.660126  446193 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0924 18:38:19.689078  446193 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0924 18:38:19.689106  446193 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0924 18:38:19.706343  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0924 18:38:19.743414  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0924 18:38:19.792959  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 18:38:19.836887  446193 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0924 18:38:19.836913  446193 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0924 18:38:19.840649  446193 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0924 18:38:19.840674  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0924 18:38:19.965626  446193 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 18:38:19.965655  446193 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 18:38:19.985499  446193 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0924 18:38:19.985528  446193 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0924 18:38:19.989273  446193 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0924 18:38:19.989298  446193 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0924 18:38:20.033337  446193 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0924 18:38:20.033367  446193 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0924 18:38:20.160613  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0924 18:38:20.172305  446193 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0924 18:38:20.172332  446193 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0924 18:38:20.283647  446193 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0924 18:38:20.283679  446193 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0924 18:38:20.294810  446193 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 18:38:20.294838  446193 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 18:38:20.295100  446193 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0924 18:38:20.295113  446193 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0924 18:38:20.331019  446193 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0924 18:38:20.331051  446193 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0924 18:38:20.440281  446193 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0924 18:38:20.440306  446193 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0924 18:38:20.480767  446193 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0924 18:38:20.480794  446193 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0924 18:38:20.490382  446193 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0924 18:38:20.490406  446193 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0924 18:38:20.573636  446193 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0924 18:38:20.573662  446193 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0924 18:38:20.594892  446193 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0924 18:38:20.594916  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0924 18:38:20.667754  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 18:38:20.775653  446193 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0924 18:38:20.775684  446193 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0924 18:38:20.783772  446193 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0924 18:38:20.783802  446193 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0924 18:38:20.874935  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0924 18:38:20.887244  446193 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:38:20.887272  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0924 18:38:21.150195  446193 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 18:38:21.150220  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0924 18:38:21.240461  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:38:21.294016  446193 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0924 18:38:21.294045  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0924 18:38:21.314139  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 18:38:21.465759  446193 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0924 18:38:21.465786  446193 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0924 18:38:21.627574  446193 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0924 18:38:21.627600  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0924 18:38:21.801258  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.479143944s)
	I0924 18:38:22.087401  446193 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0924 18:38:22.087426  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0924 18:38:22.244999  446193 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 18:38:22.245025  446193 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0924 18:38:22.566226  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 18:38:23.352212  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.823230777s)
	I0924 18:38:23.352289  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.748515789s)
	I0924 18:38:23.352325  446193 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.749095581s)
	I0924 18:38:23.353186  446193 node_ready.go:35] waiting up to 6m0s for node "addons-783184" to be "Ready" ...
	I0924 18:38:23.353451  446193 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.7503725s)
	I0924 18:38:23.353486  446193 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
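The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the forward directive and a log directive ahead of errors. Reconstructed from those sed expressions, the patched Corefile gains the block shown in the comments below (a sketch of the expected shape, not copied from live output):

    # view the patched Corefile
    kubectl --context addons-783184 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}'
    #        hosts {
    #           192.168.49.1 host.minikube.internal
    #           fallthrough
    #        }
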
	I0924 18:38:23.354588  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.949585165s)
	I0924 18:38:23.361707  446193 node_ready.go:49] node "addons-783184" has status "Ready":"True"
	I0924 18:38:23.361732  446193 node_ready.go:38] duration metric: took 8.517694ms for node "addons-783184" to be "Ready" ...
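node_ready polls the node's Ready condition; the equivalent one-off check, using kubectl's JSONPath filter syntax:

    kubectl --context addons-783184 get node addons-783184 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints True, matching the 8.5ms readiness reported above
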
	I0924 18:38:23.361742  446193 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:38:23.383385  446193 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdnmt" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:23.565677  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.923977378s)
	I0924 18:38:23.856893  446193 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-783184" context rescaled to 1 replicas
	I0924 18:38:24.387100  446193 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-hdnmt" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-hdnmt" not found
	I0924 18:38:24.387128  446193 pod_ready.go:82] duration metric: took 1.003702312s for pod "coredns-7c65d6cfc9-hdnmt" in "kube-system" namespace to be "Ready" ...
	E0924 18:38:24.387139  446193 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-hdnmt" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-hdnmt" not found
	I0924 18:38:24.387146  446193 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l42hj" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:25.759194  446193 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0924 18:38:25.759322  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:25.796643  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:26.410261  446193 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0924 18:38:26.415121  446193 pod_ready.go:103] pod "coredns-7c65d6cfc9-l42hj" in "kube-system" namespace has status "Ready":"False"
	I0924 18:38:26.502817  446193 addons.go:234] Setting addon gcp-auth=true in "addons-783184"
	I0924 18:38:26.502925  446193 host.go:66] Checking if "addons-783184" exists ...
	I0924 18:38:26.503489  446193 cli_runner.go:164] Run: docker container inspect addons-783184 --format={{.State.Status}}
	I0924 18:38:26.536384  446193 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0924 18:38:26.536454  446193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783184
	I0924 18:38:26.563561  446193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/addons-783184/id_rsa Username:docker}
	I0924 18:38:26.587123  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.880728613s)
	I0924 18:38:26.587158  446193 addons.go:475] Verifying addon ingress=true in "addons-783184"
	I0924 18:38:26.589028  446193 out.go:177] * Verifying ingress addon...
	I0924 18:38:26.591674  446193 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0924 18:38:26.596967  446193 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0924 18:38:26.596993  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
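kapi.go keeps re-listing pods under app.kubernetes.io/name=ingress-nginx until none reports Pending, which is why the same line repeats below roughly every 500ms. The closest CLI equivalent waits on the Ready condition instead; note the two admission-* Jobs complete rather than turn Ready, so the controller label is the safer selector (a sketch):

    kubectl --context addons-783184 -n ingress-nginx wait pod \
      -l app.kubernetes.io/component=controller \
      --for=condition=Ready --timeout=180s
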
	I0924 18:38:27.096838  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:27.608884  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:28.129581  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:28.439263  446193 pod_ready.go:103] pod "coredns-7c65d6cfc9-l42hj" in "kube-system" namespace has status "Ready":"False"
	I0924 18:38:28.604725  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:28.839985  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.046967491s)
	I0924 18:38:28.840084  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.679438277s)
	I0924 18:38:28.840606  446193 addons.go:475] Verifying addon registry=true in "addons-783184"
	I0924 18:38:28.840159  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.172376395s)
	I0924 18:38:28.841044  446193 addons.go:475] Verifying addon metrics-server=true in "addons-783184"
	I0924 18:38:28.840190  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.965228404s)
	I0924 18:38:28.840273  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.599784201s)
	W0924 18:38:28.841224  446193 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0924 18:38:28.841277  446193 retry.go:31] will retry after 369.079351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
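The failure above is an ordering race: the batch apply submits the VolumeSnapshotClass in the same request set as the CRDs that define it, and the REST mapping for snapshot.storage.k8s.io/v1 does not exist until those CRDs are established, hence "ensure CRDs are installed first". minikube recovers below by re-running the batch with kubectl apply --force at 18:38:29.211405; an ordering-based alternative would establish the CRDs first (a sketch):

    # 1) CRDs first
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    # 2) wait until the API server can serve the new kinds
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    # 3) now the class and controller resources resolve
    kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
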
	I0924 18:38:28.840327  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.52616223s)
	I0924 18:38:28.840436  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.096996993s)
	I0924 18:38:28.843462  446193 out.go:177] * Verifying registry addon...
	I0924 18:38:28.847345  446193 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-783184 service yakd-dashboard -n yakd-dashboard
	
	I0924 18:38:28.849988  446193 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0924 18:38:28.936096  446193 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0924 18:38:28.936120  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:29.104990  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:29.211405  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:38:29.366853  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:29.588731  446193 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.052316907s)
	I0924 18:38:29.588996  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.022729445s)
	I0924 18:38:29.589058  446193 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-783184"
	I0924 18:38:29.592068  446193 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:38:29.592229  446193 out.go:177] * Verifying csi-hostpath-driver addon...
	I0924 18:38:29.596293  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:29.596443  446193 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0924 18:38:29.597603  446193 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0924 18:38:29.598871  446193 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0924 18:38:29.598917  446193 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0924 18:38:29.602580  446193 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0924 18:38:29.602643  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:29.629186  446193 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0924 18:38:29.629263  446193 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0924 18:38:29.703519  446193 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 18:38:29.703543  446193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0924 18:38:29.744795  446193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 18:38:29.854870  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:30.114824  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:30.115181  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:30.356005  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:30.597576  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:30.602601  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:30.862567  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:30.894601  446193 pod_ready.go:103] pod "coredns-7c65d6cfc9-l42hj" in "kube-system" namespace has status "Ready":"False"
	I0924 18:38:30.938883  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.727385459s)
	I0924 18:38:30.938976  446193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.194109591s)
	I0924 18:38:30.943455  446193 addons.go:475] Verifying addon gcp-auth=true in "addons-783184"
	I0924 18:38:30.948128  446193 out.go:177] * Verifying gcp-auth addon...
	I0924 18:38:30.952207  446193 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0924 18:38:30.958465  446193 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0924 18:38:31.097297  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:31.103603  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:31.359490  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:31.596058  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:31.602328  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:31.854388  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:32.097059  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:32.102755  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:32.354057  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:32.596353  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:32.602801  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:32.855349  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:32.895138  446193 pod_ready.go:103] pod "coredns-7c65d6cfc9-l42hj" in "kube-system" namespace has status "Ready":"False"
	I0924 18:38:33.098998  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:33.103922  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:33.353909  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:33.598219  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:33.603500  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:33.855308  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:34.097498  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:34.103430  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:34.356459  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:34.597195  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:34.602361  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:34.854834  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:35.102032  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:35.105747  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:35.361232  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:35.393117  446193 pod_ready.go:103] pod "coredns-7c65d6cfc9-l42hj" in "kube-system" namespace has status "Ready":"False"
	I0924 18:38:35.596201  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:35.603450  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:35.854756  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:36.099397  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:36.105673  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:36.359230  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:36.399959  446193 pod_ready.go:93] pod "coredns-7c65d6cfc9-l42hj" in "kube-system" namespace has status "Ready":"True"
	I0924 18:38:36.400063  446193 pod_ready.go:82] duration metric: took 12.012904746s for pod "coredns-7c65d6cfc9-l42hj" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:36.400099  446193 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-783184" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:36.405857  446193 pod_ready.go:93] pod "etcd-addons-783184" in "kube-system" namespace has status "Ready":"True"
	I0924 18:38:36.405941  446193 pod_ready.go:82] duration metric: took 5.786948ms for pod "etcd-addons-783184" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:36.405972  446193 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-783184" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:36.412132  446193 pod_ready.go:93] pod "kube-apiserver-addons-783184" in "kube-system" namespace has status "Ready":"True"
	I0924 18:38:36.412207  446193 pod_ready.go:82] duration metric: took 6.186509ms for pod "kube-apiserver-addons-783184" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:36.412234  446193 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-783184" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:36.420313  446193 pod_ready.go:93] pod "kube-controller-manager-addons-783184" in "kube-system" namespace has status "Ready":"True"
	I0924 18:38:36.420389  446193 pod_ready.go:82] duration metric: took 8.133199ms for pod "kube-controller-manager-addons-783184" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:36.420421  446193 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7c74q" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:36.427913  446193 pod_ready.go:93] pod "kube-proxy-7c74q" in "kube-system" namespace has status "Ready":"True"
	I0924 18:38:36.427987  446193 pod_ready.go:82] duration metric: took 7.542772ms for pod "kube-proxy-7c74q" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:36.428014  446193 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-783184" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:36.596636  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:36.601928  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:36.792497  446193 pod_ready.go:93] pod "kube-scheduler-addons-783184" in "kube-system" namespace has status "Ready":"True"
	I0924 18:38:36.792523  446193 pod_ready.go:82] duration metric: took 364.486674ms for pod "kube-scheduler-addons-783184" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:36.792536  446193 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6tcpv" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:36.855157  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:37.097426  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:37.104299  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:37.355386  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:37.595988  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:37.602681  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:37.854225  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:38.096976  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:38.103238  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:38.355738  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:38.596992  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:38.602300  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:38.801775  446193 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6tcpv" in "kube-system" namespace has status "Ready":"False"
	I0924 18:38:38.854096  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:39.096923  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:39.103124  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:39.354286  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:39.596839  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:39.602720  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:39.853930  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:40.097247  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:40.103352  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:40.355642  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:40.600751  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:40.604690  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:40.854645  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:41.097528  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:41.102546  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:41.301835  446193 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6tcpv" in "kube-system" namespace has status "Ready":"False"
	I0924 18:38:41.355354  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:41.597203  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:41.603600  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:41.855009  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:42.101588  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:42.108006  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:42.354808  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:42.597358  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:42.602556  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:42.855079  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:43.096538  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:43.105501  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:43.357313  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:43.596163  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:43.602566  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:43.799995  446193 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6tcpv" in "kube-system" namespace has status "Ready":"True"
	I0924 18:38:43.800022  446193 pod_ready.go:82] duration metric: took 7.007477608s for pod "nvidia-device-plugin-daemonset-6tcpv" in "kube-system" namespace to be "Ready" ...
	I0924 18:38:43.800062  446193 pod_ready.go:39] duration metric: took 20.438308406s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
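	The pod_ready waits above poll each pod's Ready condition until it reports True. A minimal client-go sketch of that check, assuming an already-configured *kubernetes.Clientset; the helper names here are illustrative, not minikube's actual functions:

	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodReady polls the pod until it is Ready or the timeout elapses.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}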
	I0924 18:38:43.800082  446193 api_server.go:52] waiting for apiserver process to appear ...
	I0924 18:38:43.800163  446193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:38:43.817883  446193 api_server.go:72] duration metric: took 25.766019009s to wait for apiserver process to appear ...
	I0924 18:38:43.817915  446193 api_server.go:88] waiting for apiserver healthz status ...
	I0924 18:38:43.817941  446193 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0924 18:38:43.826075  446193 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0924 18:38:43.835074  446193 api_server.go:141] control plane version: v1.31.1
	I0924 18:38:43.835168  446193 api_server.go:131] duration metric: took 17.244893ms to wait for apiserver health ...
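	The healthz probe logged above is a plain HTTPS GET against the apiserver that treats a 200 "ok" response as healthy. A sketch of that request; InsecureSkipVerify stands in for loading the cluster CA from the kubeconfig, which a real client would do:

	package healthz

	import (
		"crypto/tls"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy GETs the /healthz endpoint (e.g.
	// https://192.168.49.2:8443/healthz) and checks for a 200 "ok" reply.
	func apiserverHealthy(url string) (bool, error) {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}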
	I0924 18:38:43.835194  446193 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 18:38:43.845077  446193 system_pods.go:59] 18 kube-system pods found
	I0924 18:38:43.845119  446193 system_pods.go:61] "coredns-7c65d6cfc9-l42hj" [09ed7be8-1829-4437-9ee1-8803951fbc2c] Running
	I0924 18:38:43.845132  446193 system_pods.go:61] "csi-hostpath-attacher-0" [c9d9e766-8222-4ae8-9f19-3aaf9cc28930] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0924 18:38:43.845142  446193 system_pods.go:61] "csi-hostpath-resizer-0" [4e9ee53c-9f05-4d78-a286-c4de5582a229] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 18:38:43.845150  446193 system_pods.go:61] "csi-hostpathplugin-qwhxj" [8ee4eb19-91c4-4897-ba2f-008371920f92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 18:38:43.845155  446193 system_pods.go:61] "etcd-addons-783184" [529e0be0-53fa-4754-83e1-f0955812b01c] Running
	I0924 18:38:43.845160  446193 system_pods.go:61] "kindnet-x95vn" [de1e6c42-a0cb-4df2-a122-645dd46c7609] Running
	I0924 18:38:43.845165  446193 system_pods.go:61] "kube-apiserver-addons-783184" [9f99bb5b-e990-456c-a11c-15e9259c91bc] Running
	I0924 18:38:43.845169  446193 system_pods.go:61] "kube-controller-manager-addons-783184" [50053294-5281-4231-8a87-9a45ccb11306] Running
	I0924 18:38:43.845180  446193 system_pods.go:61] "kube-ingress-dns-minikube" [34b8e021-cf78-4b0d-9122-3394e01628f6] Running
	I0924 18:38:43.845184  446193 system_pods.go:61] "kube-proxy-7c74q" [7ac78a01-1025-4052-a0db-da158174201c] Running
	I0924 18:38:43.845198  446193 system_pods.go:61] "kube-scheduler-addons-783184" [4c7f3d22-6ed8-4c4c-a95e-078ef4996e36] Running
	I0924 18:38:43.845204  446193 system_pods.go:61] "metrics-server-84c5f94fbc-4ckmr" [2612b668-6d6b-42bf-983a-8f4e0af581d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 18:38:43.845208  446193 system_pods.go:61] "nvidia-device-plugin-daemonset-6tcpv" [c9ba084e-6cd2-428d-82c2-c10dd5f9d5d5] Running
	I0924 18:38:43.845221  446193 system_pods.go:61] "registry-66c9cd494c-r9m2n" [4a0f5031-6fb0-4e60-83fd-6ae70f4d567b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0924 18:38:43.845227  446193 system_pods.go:61] "registry-proxy-k6vdg" [2bbc91c6-09cc-4e9b-a450-9dde1f42116b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0924 18:38:43.845235  446193 system_pods.go:61] "snapshot-controller-56fcc65765-t5bh5" [87dca475-4a57-4913-8bd1-bb2cd203503d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:38:43.845245  446193 system_pods.go:61] "snapshot-controller-56fcc65765-tvr46" [75a4b614-d5ee-422a-afef-2b72b451299e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:38:43.845249  446193 system_pods.go:61] "storage-provisioner" [4cda26e0-32af-4b06-9586-11f4e9e3f7d3] Running
	I0924 18:38:43.845256  446193 system_pods.go:74] duration metric: took 10.029165ms to wait for pod list to return data ...
	I0924 18:38:43.845264  446193 default_sa.go:34] waiting for default service account to be created ...
	I0924 18:38:43.848805  446193 default_sa.go:45] found service account: "default"
	I0924 18:38:43.848841  446193 default_sa.go:55] duration metric: took 3.571257ms for default service account to be created ...
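	The default_sa wait simply looks for a ServiceAccount named "default" in the default namespace, retrying while it is NotFound. A sketch under the same clientset assumption:

	package defaultsa

	import (
		"context"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// defaultSAExists returns (false, nil) while the account has not been
	// created yet, so the caller can keep polling.
	func defaultSAExists(cs *kubernetes.Clientset) (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil
		}
		return err == nil, err
	}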
	I0924 18:38:43.848851  446193 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 18:38:43.855460  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:43.862299  446193 system_pods.go:86] 18 kube-system pods found
	I0924 18:38:43.862345  446193 system_pods.go:89] "coredns-7c65d6cfc9-l42hj" [09ed7be8-1829-4437-9ee1-8803951fbc2c] Running
	I0924 18:38:43.862361  446193 system_pods.go:89] "csi-hostpath-attacher-0" [c9d9e766-8222-4ae8-9f19-3aaf9cc28930] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0924 18:38:43.862369  446193 system_pods.go:89] "csi-hostpath-resizer-0" [4e9ee53c-9f05-4d78-a286-c4de5582a229] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 18:38:43.862378  446193 system_pods.go:89] "csi-hostpathplugin-qwhxj" [8ee4eb19-91c4-4897-ba2f-008371920f92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 18:38:43.862423  446193 system_pods.go:89] "etcd-addons-783184" [529e0be0-53fa-4754-83e1-f0955812b01c] Running
	I0924 18:38:43.862436  446193 system_pods.go:89] "kindnet-x95vn" [de1e6c42-a0cb-4df2-a122-645dd46c7609] Running
	I0924 18:38:43.862442  446193 system_pods.go:89] "kube-apiserver-addons-783184" [9f99bb5b-e990-456c-a11c-15e9259c91bc] Running
	I0924 18:38:43.862447  446193 system_pods.go:89] "kube-controller-manager-addons-783184" [50053294-5281-4231-8a87-9a45ccb11306] Running
	I0924 18:38:43.862460  446193 system_pods.go:89] "kube-ingress-dns-minikube" [34b8e021-cf78-4b0d-9122-3394e01628f6] Running
	I0924 18:38:43.862465  446193 system_pods.go:89] "kube-proxy-7c74q" [7ac78a01-1025-4052-a0db-da158174201c] Running
	I0924 18:38:43.862469  446193 system_pods.go:89] "kube-scheduler-addons-783184" [4c7f3d22-6ed8-4c4c-a95e-078ef4996e36] Running
	I0924 18:38:43.862475  446193 system_pods.go:89] "metrics-server-84c5f94fbc-4ckmr" [2612b668-6d6b-42bf-983a-8f4e0af581d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 18:38:43.862481  446193 system_pods.go:89] "nvidia-device-plugin-daemonset-6tcpv" [c9ba084e-6cd2-428d-82c2-c10dd5f9d5d5] Running
	I0924 18:38:43.862491  446193 system_pods.go:89] "registry-66c9cd494c-r9m2n" [4a0f5031-6fb0-4e60-83fd-6ae70f4d567b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0924 18:38:43.862499  446193 system_pods.go:89] "registry-proxy-k6vdg" [2bbc91c6-09cc-4e9b-a450-9dde1f42116b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0924 18:38:43.862510  446193 system_pods.go:89] "snapshot-controller-56fcc65765-t5bh5" [87dca475-4a57-4913-8bd1-bb2cd203503d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:38:43.862518  446193 system_pods.go:89] "snapshot-controller-56fcc65765-tvr46" [75a4b614-d5ee-422a-afef-2b72b451299e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:38:43.862527  446193 system_pods.go:89] "storage-provisioner" [4cda26e0-32af-4b06-9586-11f4e9e3f7d3] Running
	I0924 18:38:43.862534  446193 system_pods.go:126] duration metric: took 13.677385ms to wait for k8s-apps to be running ...
	I0924 18:38:43.862546  446193 system_svc.go:44] waiting for kubelet service to be running ...
	I0924 18:38:43.862601  446193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:38:43.880895  446193 system_svc.go:56] duration metric: took 18.337961ms (WaitForService) to wait for kubelet
	I0924 18:38:43.880928  446193 kubeadm.go:582] duration metric: took 25.82907335s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
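	The kubelet check above shells out over SSH and relies on systemctl's exit code: 0 means the unit is active. The local equivalent, as a sketch assuming a systemd host (argument order mirrors the command in the log):

	package kubeletsvc

	import "os/exec"

	// kubeletActive returns true when
	// `sudo systemctl is-active --quiet service kubelet` exits 0.
	func kubeletActive() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
	}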
	I0924 18:38:43.880955  446193 node_conditions.go:102] verifying NodePressure condition ...
	I0924 18:38:43.884551  446193 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0924 18:38:43.884586  446193 node_conditions.go:123] node cpu capacity is 2
	I0924 18:38:43.884600  446193 node_conditions.go:105] duration metric: took 3.638054ms to run NodePressure ...
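	The NodePressure step reads each node's advertised capacity; note the 2-CPU figure here, which is where the eventual "Insufficient cpu" scheduling failure for test-job-nginx-0 originates. A sketch of reading those values with client-go (clientset assumed configured):

	package nodecap

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printCapacity lists every node's CPU and ephemeral-storage capacity,
	// e.g. "addons-783184: cpu=2 ephemeral-storage=203034800Ki".
	func printCapacity(cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity.Cpu()
			eph := n.Status.Capacity.StorageEphemeral()
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
		return nil
	}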
	I0924 18:38:43.884622  446193 start.go:241] waiting for startup goroutines ...
	I0924 18:38:44.096238  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:44.103232  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:44.354805  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:44.601688  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:44.604343  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:44.854222  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:45.105301  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:45.127998  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:45.359046  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:45.596866  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:45.602759  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:45.854242  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:46.096423  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:46.102879  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:46.354347  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:46.596621  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:46.607361  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:46.853679  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:47.097124  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:47.103289  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:47.354361  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:47.605281  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:47.642014  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:47.854396  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:48.096571  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:48.102828  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:48.355292  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:48.597650  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:48.601813  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:48.855282  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:49.096627  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:49.101782  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:49.354395  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:49.598021  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:49.605177  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:49.854527  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:38:50.097008  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:50.103585  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:50.358710  446193 kapi.go:107] duration metric: took 21.50871844s to wait for kubernetes.io/minikube-addons=registry ...
	I0924 18:38:50.597163  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:50.603917  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:51.098676  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:51.104700  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:51.597228  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:51.602509  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:52.097110  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:52.102852  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:52.596636  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:52.602160  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:53.096641  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:53.102472  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:53.596483  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:53.603045  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:54.098277  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:54.110678  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:54.596806  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:54.601838  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:55.096080  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:55.102875  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:55.596632  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:55.601999  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:56.096444  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:56.102880  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:56.596801  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:56.602651  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:57.096716  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:57.101962  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:57.596467  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:57.602690  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:58.096699  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:58.102834  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:58.596237  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:58.603408  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:59.096621  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:59.102633  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:38:59.603985  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:38:59.616546  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:00.112085  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:00.122802  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:00.597064  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:00.602578  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:01.097058  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:01.102801  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:01.596921  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:01.602641  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:02.096752  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:02.102138  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:02.596133  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:02.602410  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:03.096559  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:03.102327  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:03.596673  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:03.601981  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:04.099888  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:04.105167  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:04.598290  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:04.608547  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:05.097629  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:05.108451  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:05.598895  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:05.607672  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:06.097487  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:06.103998  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:06.598715  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:06.608641  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:07.097855  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:07.103518  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:07.596946  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:07.603327  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:08.096272  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:08.102472  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:08.598013  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:08.602704  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:09.096047  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:09.102654  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:09.597851  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:09.603894  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:10.097911  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:10.103532  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:10.602313  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:10.605964  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:11.096926  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:11.102665  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:11.596968  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:11.602195  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:12.100261  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:12.104905  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:12.597842  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:12.602367  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:13.096511  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:13.103254  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:13.595920  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:13.605765  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:14.095896  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:14.102305  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:14.596578  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:14.605153  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:15.097220  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:15.103992  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:15.597086  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:15.602553  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:16.096954  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:16.102821  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:16.595841  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:16.602609  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:17.096427  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:17.102610  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:17.595743  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:17.602096  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:18.096142  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:18.102616  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:18.596246  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:18.602930  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:19.096402  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:19.102871  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:19.596616  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:19.608212  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:20.101125  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:20.106781  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:20.596618  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:20.602669  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:21.097133  446193 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:39:21.103152  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:21.617785  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:21.619143  446193 kapi.go:107] duration metric: took 55.027467759s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0924 18:39:22.103388  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:22.616148  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:23.107387  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:23.604351  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:24.103356  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:24.603239  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:25.105304  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:25.602578  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:26.103917  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:26.602800  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:27.102840  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:27.604541  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:28.102964  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:28.606241  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:29.102226  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:29.603033  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:30.104148  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:30.603092  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:39:31.103267  446193 kapi.go:107] duration metric: took 1m1.505663514s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
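	Each kapi.go:96 line in the stretch above is one tick of the same loop: list the pods matching an addon's label selector and keep waiting until one reaches Running. A condensed sketch of that poll; the selector, interval, and function name are illustrative, not minikube's exact implementation:

	package kapiwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel polls pods matching selector (e.g.
	// "kubernetes.io/minikube-addons=csi-hostpath-driver") until one is Running.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return nil
					}
				}
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}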
	I0924 18:39:54.033520  446193 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0924 18:39:54.033542  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:39:54.457871  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:39:54.955923  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:39:55.458457  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:39:55.956294  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:39:56.457499  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:39:56.956241  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:39:57.458363  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:39:57.956615  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:39:58.457617  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:39:58.956374  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:39:59.458294  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:39:59.955314  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:00.513232  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:00.957328  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:01.456958  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:01.957290  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:02.457191  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:02.955920  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:03.459180  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:03.956421  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:04.458758  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:04.955979  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:05.459090  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:05.955768  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:06.457717  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:06.955704  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:07.457752  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:07.955210  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:08.458103  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:08.956501  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:09.457057  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:09.956524  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:10.457933  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:10.955786  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:11.457138  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:11.956624  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:12.458569  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:12.955936  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:13.457909  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:13.955804  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:14.456080  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:14.955563  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:15.458823  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:15.955820  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:16.456866  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:16.955532  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:17.457595  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:17.956416  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:18.455941  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:18.956166  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:19.458554  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:19.956387  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:20.459846  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:20.956096  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:21.458468  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:21.955767  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:22.458412  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:22.956126  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:23.458769  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:23.956494  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:24.457017  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:24.955431  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:25.457996  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:25.955695  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:26.459062  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:26.955740  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:27.457902  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:27.955521  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:28.459332  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:28.956305  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:29.457175  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:29.955407  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:30.457790  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:30.955268  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:31.457710  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:31.955875  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:32.457770  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:32.956378  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:33.456046  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:33.960635  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:34.461569  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:34.956839  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:35.458091  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:35.956068  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:36.459698  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:36.956199  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:37.460663  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:37.955823  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:38.458660  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:38.956070  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:39.458307  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:39.956352  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:40.458742  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:40.955462  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:41.458227  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:41.955533  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:42.458860  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:40:42.955712  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 35 near-identical "waiting for pod kubernetes.io/minikube-addons=gcp-auth, current state: Pending" poll entries, one roughly every 500ms from 18:40:43 to 18:41:00, elided ...]
	I0924 18:41:00.955608  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:41:01.457492  446193 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:41:01.956185  446193 kapi.go:107] duration metric: took 2m31.003976368s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0924 18:41:01.958418  446193 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-783184 cluster.
	I0924 18:41:01.960332  446193 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0924 18:41:01.962355  446193 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0924 18:41:01.964139  446193 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, storage-provisioner, metrics-server, inspektor-gadget, volcano, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0924 18:41:01.966320  446193 addons.go:510] duration metric: took 2m43.914141393s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner-rancher storage-provisioner metrics-server inspektor-gadget volcano yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0924 18:41:01.966386  446193 start.go:246] waiting for cluster config update ...
	I0924 18:41:01.966408  446193 start.go:255] writing updated cluster config ...
	I0924 18:41:01.966708  446193 ssh_runner.go:195] Run: rm -f paused
	I0924 18:41:02.337758  446193 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 18:41:02.340051  446193 out.go:177] * Done! kubectl is now configured to use "addons-783184" cluster and "default" namespace by default
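
A minimal sketch of the two gcp-auth hints printed above, assuming a cluster with the addon enabled; the pod name and image are illustrative, and only the `gcp-auth-skip-secret` label key and the `--refresh` flag come from the messages themselves:

    # Opt a single pod out of credential mounting via the label the addon honors.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-creds-demo            # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
      - name: app
        image: nginx
    EOF

    # Re-mount credentials into pods that existed before the addon came up.
    minikube addons enable gcp-auth --refresh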
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	461faadb3012b       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   f7278c4144105       gadget-n4hxg
	82bbe7e8c6233       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   e35cb80b799d1       gcp-auth-89d5ffd79-zwg5g
	d42610c565805       1a9605c872c1d       4 minutes ago       Running             admission                                0                   9186de4ac8587       volcano-admission-5874dfdd79-8t7dh
	74ee17c1596ed       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   41ff07bf3cbad       csi-hostpathplugin-qwhxj
	d6845875942d2       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   41ff07bf3cbad       csi-hostpathplugin-qwhxj
	2437ec59c8b99       922312104da8a       4 minutes ago       Running             liveness-probe                           0                   41ff07bf3cbad       csi-hostpathplugin-qwhxj
	21a6770853af3       08f6b2990811a       4 minutes ago       Running             hostpath                                 0                   41ff07bf3cbad       csi-hostpathplugin-qwhxj
	14fd41cae83c2       0107d56dbc0be       4 minutes ago       Running             node-driver-registrar                    0                   41ff07bf3cbad       csi-hostpathplugin-qwhxj
	368d4097eb2e9       289a818c8d9c5       4 minutes ago       Running             controller                               0                   ae3236aa954fd       ingress-nginx-controller-bc57996ff-n4zlp
	7640e76b622f9       6aa88c604f2b4       5 minutes ago       Running             volcano-scheduler                        0                   95c030184b4ec       volcano-scheduler-6c9778cbdf-97rkk
	ffaa5a5fab821       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   41ff07bf3cbad       csi-hostpathplugin-qwhxj
	0c3214113757d       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   1d1c4563b824e       csi-hostpath-attacher-0
	96e24669fd000       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   db2ed070cd432       csi-hostpath-resizer-0
	54d3874c16644       23cbb28ae641a       5 minutes ago       Running             volcano-controllers                      0                   561075eca4c1e       volcano-controllers-789ffc5785-pn52r
	5a9653fc0ec74       420193b27261a       5 minutes ago       Exited              patch                                    0                   028ec198ef163       ingress-nginx-admission-patch-tlf2n
	e1b3941396bd9       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   f347d06e5eb7f       metrics-server-84c5f94fbc-4ckmr
	be1619d396ec0       77bdba588b953       5 minutes ago       Running             yakd                                     0                   3bb64e9f90ac9       yakd-dashboard-67d98fc6b-psgsz
	738b7d5141141       be9cac3585579       5 minutes ago       Running             cloud-spanner-emulator                   0                   0e964c279e8b9       cloud-spanner-emulator-5b584cc74-zz9tw
	1c8d4d1ab83e8       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   b025c23171e85       snapshot-controller-56fcc65765-tvr46
	1aff52409cd9f       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   c6dd88e739fc5       snapshot-controller-56fcc65765-t5bh5
	5aeb6bec63f52       420193b27261a       5 minutes ago       Exited              create                                   0                   cf8790eaec7eb       ingress-nginx-admission-create-f59th
	8081626da1e30       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   1b19eaa9db1f5       registry-proxy-k6vdg
	08ea43aacbb9a       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   131f177e8ae47       local-path-provisioner-86d989889c-qg9ql
	73e6590eb0ef6       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   9036090c33a4a       registry-66c9cd494c-r9m2n
	8317b49b3ea7a       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   70f3c20510ca1       nvidia-device-plugin-daemonset-6tcpv
	cbfeebbb3b840       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   571bf244b49c8       coredns-7c65d6cfc9-l42hj
	ffc0135eba33e       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   70c898bfef28f       kube-ingress-dns-minikube
	2bde223b96e60       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   85f17198f20db       storage-provisioner
	2205ff1511b99       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   c275a12cc86db       kindnet-x95vn
	ba4aea080842e       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   e3c5dc847e4aa       kube-proxy-7c74q
	ee4d2d1bda823       27e3830e14027       6 minutes ago       Running             etcd                                     0                   2515b698bed9d       etcd-addons-783184
	bb4686f9e0857       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   8b5845c204adc       kube-scheduler-addons-783184
	5da91b481ae81       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   fae79f75947e7       kube-apiserver-addons-783184
	5289d6335c9b8       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   d500e3b77d3e3       kube-controller-manager-addons-783184
	
	
	==> containerd <==
	Sep 24 18:42:09 addons-783184 containerd[822]: time="2024-09-24T18:42:09.947080164Z" level=info msg="CreateContainer within sandbox \"f7278c414410534755652ed02ca8452164707d265ff1609f599e1bc45ee411fd\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 24 18:42:09 addons-783184 containerd[822]: time="2024-09-24T18:42:09.967409163Z" level=info msg="CreateContainer within sandbox \"f7278c414410534755652ed02ca8452164707d265ff1609f599e1bc45ee411fd\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411\""
	Sep 24 18:42:09 addons-783184 containerd[822]: time="2024-09-24T18:42:09.968145945Z" level=info msg="StartContainer for \"461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411\""
	Sep 24 18:42:10 addons-783184 containerd[822]: time="2024-09-24T18:42:10.049046337Z" level=info msg="StartContainer for \"461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411\" returns successfully"
	Sep 24 18:42:11 addons-783184 containerd[822]: time="2024-09-24T18:42:11.525274283Z" level=error msg="ExecSync for \"461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411\" failed" error="failed to exec in container: failed to start exec \"bd929bc6225942655bdae4a27c78740ae3a975e6f7d328911109df45439014cc\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 24 18:42:11 addons-783184 containerd[822]: time="2024-09-24T18:42:11.536811475Z" level=error msg="ExecSync for \"461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411\" failed" error="failed to exec in container: failed to start exec \"44617c49b31b95adf054beee2dc34876fdab79a121cb0093a916c29d8db31395\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 24 18:42:11 addons-783184 containerd[822]: time="2024-09-24T18:42:11.550901281Z" level=error msg="ExecSync for \"461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411\" failed" error="failed to exec in container: failed to start exec \"363479750c9da7beb857d878e6ca4469fb6367b4aded5cba8b6a13456a67a028\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 24 18:42:11 addons-783184 containerd[822]: time="2024-09-24T18:42:11.565681797Z" level=error msg="ExecSync for \"461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411\" failed" error="failed to exec in container: failed to start exec \"836167e868769ad0e2826848ab8b5963d9ca12f20e8fa55304167908db18373b\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 24 18:42:11 addons-783184 containerd[822]: time="2024-09-24T18:42:11.575215392Z" level=error msg="ExecSync for \"461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411\" failed" error="failed to exec in container: failed to start exec \"4cefd5eb6e324b3d2e72f55a2534a85caf908328e8e98c69afcb9c20c1877a52\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 24 18:42:11 addons-783184 containerd[822]: time="2024-09-24T18:42:11.585488863Z" level=error msg="ExecSync for \"461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411\" failed" error="failed to exec in container: failed to start exec \"aff7fed6e87927d28b750b8fe81500440a2818664cf03259955128cfda7c8549\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 24 18:42:11 addons-783184 containerd[822]: time="2024-09-24T18:42:11.682990823Z" level=info msg="shim disconnected" id=461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411 namespace=k8s.io
	Sep 24 18:42:11 addons-783184 containerd[822]: time="2024-09-24T18:42:11.683052648Z" level=warning msg="cleaning up after shim disconnected" id=461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411 namespace=k8s.io
	Sep 24 18:42:11 addons-783184 containerd[822]: time="2024-09-24T18:42:11.683391091Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 24 18:42:12 addons-783184 containerd[822]: time="2024-09-24T18:42:12.150668702Z" level=info msg="RemoveContainer for \"fb1eea796ca2c6b75216a58df461863e9e422288299124f8da101fb7676893e7\""
	Sep 24 18:42:12 addons-783184 containerd[822]: time="2024-09-24T18:42:12.160024313Z" level=info msg="RemoveContainer for \"fb1eea796ca2c6b75216a58df461863e9e422288299124f8da101fb7676893e7\" returns successfully"
	Sep 24 18:42:12 addons-783184 containerd[822]: time="2024-09-24T18:42:12.963801702Z" level=info msg="RemoveContainer for \"ef6ca4dc48a13ea7d1885cb8f9244f00dc4ed41db6618ff9b3cf9921177ae291\""
	Sep 24 18:42:12 addons-783184 containerd[822]: time="2024-09-24T18:42:12.970019185Z" level=info msg="RemoveContainer for \"ef6ca4dc48a13ea7d1885cb8f9244f00dc4ed41db6618ff9b3cf9921177ae291\" returns successfully"
	Sep 24 18:42:12 addons-783184 containerd[822]: time="2024-09-24T18:42:12.973087356Z" level=info msg="StopPodSandbox for \"550dd51047ae1baa226c61c87a286cce3ed594e51e0c82b9693a9517378f51f3\""
	Sep 24 18:42:12 addons-783184 containerd[822]: time="2024-09-24T18:42:12.981163189Z" level=info msg="TearDown network for sandbox \"550dd51047ae1baa226c61c87a286cce3ed594e51e0c82b9693a9517378f51f3\" successfully"
	Sep 24 18:42:12 addons-783184 containerd[822]: time="2024-09-24T18:42:12.981202213Z" level=info msg="StopPodSandbox for \"550dd51047ae1baa226c61c87a286cce3ed594e51e0c82b9693a9517378f51f3\" returns successfully"
	Sep 24 18:42:12 addons-783184 containerd[822]: time="2024-09-24T18:42:12.981771800Z" level=info msg="RemovePodSandbox for \"550dd51047ae1baa226c61c87a286cce3ed594e51e0c82b9693a9517378f51f3\""
	Sep 24 18:42:12 addons-783184 containerd[822]: time="2024-09-24T18:42:12.981893408Z" level=info msg="Forcibly stopping sandbox \"550dd51047ae1baa226c61c87a286cce3ed594e51e0c82b9693a9517378f51f3\""
	Sep 24 18:42:12 addons-783184 containerd[822]: time="2024-09-24T18:42:12.989335612Z" level=info msg="TearDown network for sandbox \"550dd51047ae1baa226c61c87a286cce3ed594e51e0c82b9693a9517378f51f3\" successfully"
	Sep 24 18:42:12 addons-783184 containerd[822]: time="2024-09-24T18:42:12.995696856Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"550dd51047ae1baa226c61c87a286cce3ed594e51e0c82b9693a9517378f51f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 24 18:42:12 addons-783184 containerd[822]: time="2024-09-24T18:42:12.995809668Z" level=info msg="RemovePodSandbox \"550dd51047ae1baa226c61c87a286cce3ed594e51e0c82b9693a9517378f51f3\" returns successfully"
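
The `cannot exec in a stopped container` errors above are what exec probes return when they race a container that has already exited (here the crash-looping gadget container). A sketch of how one might confirm that on the node with crictl, using the container ID from the log; exact output format varies by crictl version:

    # List the container (including exited ones) and check its recorded state.
    minikube ssh -p addons-783184 "sudo crictl ps -a --id 461faadb3012b"
    minikube ssh -p addons-783184 "sudo crictl inspect 461faadb3012b"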
	
	
	==> coredns [cbfeebbb3b84043ee8cdcd0341c633886d4b772f6f46f7f6fdd99e893f8e318a] <==
	[INFO] 10.244.0.5:38318 - 48301 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084323s
	[INFO] 10.244.0.5:33857 - 35519 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002446656s
	[INFO] 10.244.0.5:33857 - 9402 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002211204s
	[INFO] 10.244.0.5:45486 - 24160 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000255407s
	[INFO] 10.244.0.5:45486 - 37730 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000224655s
	[INFO] 10.244.0.5:44968 - 46527 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000137287s
	[INFO] 10.244.0.5:44968 - 42426 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000057493s
	[INFO] 10.244.0.5:47069 - 52110 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000097549s
	[INFO] 10.244.0.5:47069 - 42892 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000040722s
	[INFO] 10.244.0.5:59277 - 43616 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107634s
	[INFO] 10.244.0.5:59277 - 22626 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039803s
	[INFO] 10.244.0.5:34886 - 13524 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001742744s
	[INFO] 10.244.0.5:34886 - 32210 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001713773s
	[INFO] 10.244.0.5:53575 - 1766 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072664s
	[INFO] 10.244.0.5:53575 - 15072 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000102546s
	[INFO] 10.244.0.24:49226 - 11987 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000223342s
	[INFO] 10.244.0.24:45443 - 53742 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000105772s
	[INFO] 10.244.0.24:49940 - 58220 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115191s
	[INFO] 10.244.0.24:49842 - 52223 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000093119s
	[INFO] 10.244.0.24:39077 - 36814 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122017s
	[INFO] 10.244.0.24:34950 - 55838 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096402s
	[INFO] 10.244.0.24:53464 - 8009 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002214638s
	[INFO] 10.244.0.24:53955 - 26832 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003114257s
	[INFO] 10.244.0.24:47173 - 58719 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002589461s
	[INFO] 10.244.0.24:60127 - 32733 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002532387s
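
The NXDOMAIN/NOERROR pairs above are ordinary ndots expansion: each name is tried against every suffix in the pod's search list before the bare, absolute query finally succeeds. A representative pod resolver config, with the suffixes inferred from the queries themselves; the nameserver IP shown is the conventional kube-dns ClusterIP and is an assumption:

    $ cat /etc/resolv.conf
    search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    nameserver 10.96.0.10
    options ndots:5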
	
	
	==> describe nodes <==
	Name:               addons-783184
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-783184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=addons-783184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T18_38_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-783184
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-783184"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:38:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-783184
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:44:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:41:16 +0000   Tue, 24 Sep 2024 18:38:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:41:16 +0000   Tue, 24 Sep 2024 18:38:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:41:16 +0000   Tue, 24 Sep 2024 18:38:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:41:16 +0000   Tue, 24 Sep 2024 18:38:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-783184
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 b342e9f398804b1b8d005ed3ab1d2e06
	  System UUID:                cc706a0c-3443-48d9-ab9f-b237a91ce695
	  Boot ID:                    c9bfc008-5ebc-46f8-9dcc-99ff4d0a3684
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-zz9tw      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  gadget                      gadget-n4hxg                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-zwg5g                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-n4zlp    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-7c65d6cfc9-l42hj                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-qwhxj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-783184                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m8s
	  kube-system                 kindnet-x95vn                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-783184                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-addons-783184       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-proxy-7c74q                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-783184                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 metrics-server-84c5f94fbc-4ckmr             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m57s
	  kube-system                 nvidia-device-plugin-daemonset-6tcpv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 registry-66c9cd494c-r9m2n                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-proxy-k6vdg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 snapshot-controller-56fcc65765-t5bh5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-tvr46        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  local-path-storage          local-path-provisioner-86d989889c-qg9ql     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  volcano-system              volcano-admission-5874dfdd79-8t7dh          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-789ffc5785-pn52r        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-scheduler-6c9778cbdf-97rkk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-psgsz              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m1s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  6m15s (x8 over 6m15s)  kubelet          Node addons-783184 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m15s (x7 over 6m15s)  kubelet          Node addons-783184 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m15s (x7 over 6m15s)  kubelet          Node addons-783184 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m9s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m8s                   kubelet          Node addons-783184 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m8s                   kubelet          Node addons-783184 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m8s                   kubelet          Node addons-783184 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m4s                   node-controller  Node addons-783184 event: Registered Node addons-783184 in Controller
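
One detail above bears directly on scheduling headroom: the node exposes only 2 allocatable CPUs and already carries 1050m (52%) of requests, so any new pod asking for roughly a full CPU has nowhere to fit and will sit Pending as unschedulable. A quick way to check both sides, sketched with kubectl using this run's context name:

    # What the node has committed so far.
    kubectl --context addons-783184 describe node addons-783184 | grep -A 8 'Allocated resources'
    # Anything currently stuck waiting for room.
    kubectl --context addons-783184 get pods -A --field-selector=status.phase=Pending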
	
	
	==> dmesg <==
	[Sep24 17:35] hrtimer: interrupt took 23501182 ns
	[Sep24 18:05] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep24 18:11] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.000187] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.018249] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[Sep24 18:22] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	
	
	==> etcd [ee4d2d1bda82362b5134d3a1575f541916de64a50e76c83edd91cac3bcdc56a1] <==
	{"level":"info","ts":"2024-09-24T18:38:07.234907Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T18:38:07.235039Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-24T18:38:07.235060Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-24T18:38:07.236334Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T18:38:07.236370Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T18:38:08.057450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-24T18:38:08.057560Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-24T18:38:08.057607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-24T18:38:08.057679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-24T18:38:08.057712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-24T18:38:08.057758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-24T18:38:08.057803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-24T18:38:08.061534Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T18:38:08.065646Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-783184 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T18:38:08.065734Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T18:38:08.066052Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T18:38:08.066823Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T18:38:08.067802Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T18:38:08.068668Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T18:38:08.090363Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-24T18:38:08.069134Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T18:38:08.077953Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T18:38:08.118076Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T18:38:08.137562Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T18:38:08.137673Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [82bbe7e8c62337dd0a7a91036200b1d880af05f73869b2e89adab475fa77acbc] <==
	2024/09/24 18:41:01 GCP Auth Webhook started!
	2024/09/24 18:41:18 Ready to marshal response ...
	2024/09/24 18:41:18 Ready to write response ...
	2024/09/24 18:41:19 Ready to marshal response ...
	2024/09/24 18:41:19 Ready to write response ...
	
	
	==> kernel <==
	 18:44:21 up  2:26,  0 users,  load average: 0.15, 1.17, 2.26
	Linux addons-783184 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [2205ff1511b99f2df9ded57048ffa142d13e8e556de5f14db54261482865494b] <==
	I0924 18:42:19.910603       1 main.go:299] handling current node
	I0924 18:42:29.917020       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 18:42:29.917055       1 main.go:299] handling current node
	I0924 18:42:39.918271       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 18:42:39.918304       1 main.go:299] handling current node
	I0924 18:42:49.913114       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 18:42:49.913156       1 main.go:299] handling current node
	I0924 18:42:59.912672       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 18:42:59.912708       1 main.go:299] handling current node
	I0924 18:43:09.917885       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 18:43:09.917924       1 main.go:299] handling current node
	I0924 18:43:19.910180       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 18:43:19.910216       1 main.go:299] handling current node
	I0924 18:43:29.910758       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 18:43:29.910792       1 main.go:299] handling current node
	I0924 18:43:39.918749       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 18:43:39.918782       1 main.go:299] handling current node
	I0924 18:43:49.917896       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 18:43:49.918128       1 main.go:299] handling current node
	I0924 18:43:59.911181       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 18:43:59.911221       1 main.go:299] handling current node
	I0924 18:44:09.912692       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 18:44:09.912728       1 main.go:299] handling current node
	I0924 18:44:19.910604       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 18:44:19.910671       1 main.go:299] handling current node
	
	
	==> kube-apiserver [5da91b481ae81ff0bd32731d22c5d83e3544e9421b26ab971eb94ed4406a0773] <==
	W0924 18:39:28.241112       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.38.210:443: connect: connection refused
	W0924 18:39:29.283532       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.38.210:443: connect: connection refused
	W0924 18:39:30.331353       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.38.210:443: connect: connection refused
	W0924 18:39:31.427988       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.38.210:443: connect: connection refused
	W0924 18:39:32.464690       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.38.210:443: connect: connection refused
	W0924 18:39:33.534768       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.38.210:443: connect: connection refused
	W0924 18:39:33.831428       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.226.135:443: connect: connection refused
	E0924 18:39:33.831484       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.226.135:443: connect: connection refused" logger="UnhandledError"
	W0924 18:39:33.833124       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.111.38.210:443: connect: connection refused
	W0924 18:39:33.922351       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.226.135:443: connect: connection refused
	E0924 18:39:33.922394       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.226.135:443: connect: connection refused" logger="UnhandledError"
	W0924 18:39:33.924110       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.111.38.210:443: connect: connection refused
	W0924 18:39:34.548358       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.38.210:443: connect: connection refused
	W0924 18:39:35.566892       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.38.210:443: connect: connection refused
	W0924 18:39:36.575063       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.38.210:443: connect: connection refused
	W0924 18:39:37.642185       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.38.210:443: connect: connection refused
	W0924 18:39:38.725164       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.38.210:443: connect: connection refused
	W0924 18:39:53.860419       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.226.135:443: connect: connection refused
	E0924 18:39:53.860460       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.226.135:443: connect: connection refused" logger="UnhandledError"
	W0924 18:40:33.841833       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.226.135:443: connect: connection refused
	E0924 18:40:33.841880       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.226.135:443: connect: connection refused" logger="UnhandledError"
	W0924 18:40:33.930229       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.226.135:443: connect: connection refused
	E0924 18:40:33.930268       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.226.135:443: connect: connection refused" logger="UnhandledError"
	I0924 18:41:18.883793       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0924 18:41:19.004054       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
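
The contrast above between "failing closed" (mutatequeue.volcano.sh) and "failing open" (gcp-auth-mutate.k8s.io) is driven by each webhook's failurePolicy. A minimal sketch of the relevant field; the names and service details below are illustrative, not the actual addon manifests:

    # failurePolicy decides what the apiserver does when a webhook is unreachable:
    #   Fail   -> reject the request ("failing closed", as volcano does above)
    #   Ignore -> admit the request  ("failing open", as gcp-auth does above)
    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: demo-webhook             # hypothetical
    webhooks:
    - name: demo.example.com
      failurePolicy: Fail            # or Ignore
      clientConfig:
        service:
          name: demo-svc
          namespace: default
          path: /mutate
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
      admissionReviewVersions: ["v1"]
      sideEffects: None
    EOF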
	
	
	==> kube-controller-manager [5289d6335c9b8d12f61f68ca91107c3f876c40b82b998bcf359ceeadd7176468] <==
	I0924 18:40:33.868293       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 18:40:33.868698       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 18:40:33.889126       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 18:40:33.938849       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 18:40:33.945786       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 18:40:33.949622       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 18:40:33.969565       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 18:40:34.822647       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 18:40:34.847227       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 18:40:35.944924       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 18:40:35.963972       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 18:40:36.951579       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 18:40:36.961548       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 18:40:36.972182       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 18:40:36.976987       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 18:40:36.986542       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 18:40:36.998087       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 18:41:01.920995       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="8.639022ms"
	I0924 18:41:01.923198       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="39.031µs"
	I0924 18:41:06.041279       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0924 18:41:06.043545       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0924 18:41:06.117619       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0924 18:41:06.121136       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0924 18:41:16.466724       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-783184"
	I0924 18:41:18.583304       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [ba4aea080842e2af5d99b2f7d92ecf3d40936d0d99bef7f04b023e4f9977f978] <==
	I0924 18:38:19.344117       1 server_linux.go:66] "Using iptables proxy"
	I0924 18:38:19.444889       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0924 18:38:19.444965       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 18:38:19.501529       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0924 18:38:19.501598       1 server_linux.go:169] "Using iptables Proxier"
	I0924 18:38:19.505978       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 18:38:19.523487       1 server.go:483] "Version info" version="v1.31.1"
	I0924 18:38:19.523522       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:38:19.531750       1 config.go:199] "Starting service config controller"
	I0924 18:38:19.531802       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 18:38:19.532147       1 config.go:105] "Starting endpoint slice config controller"
	I0924 18:38:19.532177       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 18:38:19.536100       1 config.go:328] "Starting node config controller"
	I0924 18:38:19.536205       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 18:38:19.632560       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 18:38:19.632630       1 shared_informer.go:320] Caches are synced for service config
	I0924 18:38:19.637564       1 shared_informer.go:320] Caches are synced for node config
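
The configuration warning above suggests its own fix. In flag form it is exactly what the message quotes; in a kubeadm-style setup the equivalent lives in the kube-proxy ConfigMap. A sketch of both, assuming Kubernetes 1.29+ where the special "primary" value exists; this is not this cluster's actual configuration:

    # Flag form, as the warning itself suggests:
    kube-proxy --nodeport-addresses primary

    # ConfigMap form (KubeProxyConfiguration fragment):
    #   nodePortAddresses:
    #   - primary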
	
	
	==> kube-scheduler [bb4686f9e0857ea5b5ba62067e2fd37fa63709a3aca17fef20e3288d872e5f67] <==
	W0924 18:38:10.433131       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 18:38:10.433158       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 18:38:10.441685       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 18:38:10.441965       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:38:10.445295       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 18:38:10.445574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0924 18:38:10.445800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 18:38:10.445904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 18:38:10.446133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 18:38:10.446233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:38:10.446438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 18:38:10.446531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:38:11.265153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 18:38:11.265198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:38:11.289766       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0924 18:38:11.290064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:38:11.450047       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 18:38:11.450309       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:38:11.468672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 18:38:11.468888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:38:11.474473       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 18:38:11.474522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:38:11.496054       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 18:38:11.496320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0924 18:38:12.007623       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 18:42:15 addons-783184 kubelet[1497]: E0924 18:42:15.686185    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-n4hxg_gadget(766b85f6-6e39-42ae-a8d7-d232d3903192)\"" pod="gadget/gadget-n4hxg" podUID="766b85f6-6e39-42ae-a8d7-d232d3903192"
	Sep 24 18:42:24 addons-783184 kubelet[1497]: I0924 18:42:24.807013    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-r9m2n" secret="" err="secret \"gcp-auth\" not found"
	Sep 24 18:42:25 addons-783184 kubelet[1497]: I0924 18:42:25.807876    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-6tcpv" secret="" err="secret \"gcp-auth\" not found"
	Sep 24 18:42:28 addons-783184 kubelet[1497]: I0924 18:42:28.807864    1497 scope.go:117] "RemoveContainer" containerID="461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411"
	Sep 24 18:42:28 addons-783184 kubelet[1497]: E0924 18:42:28.808059    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-n4hxg_gadget(766b85f6-6e39-42ae-a8d7-d232d3903192)\"" pod="gadget/gadget-n4hxg" podUID="766b85f6-6e39-42ae-a8d7-d232d3903192"
	Sep 24 18:42:41 addons-783184 kubelet[1497]: I0924 18:42:41.807946    1497 scope.go:117] "RemoveContainer" containerID="461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411"
	Sep 24 18:42:41 addons-783184 kubelet[1497]: E0924 18:42:41.808151    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-n4hxg_gadget(766b85f6-6e39-42ae-a8d7-d232d3903192)\"" pod="gadget/gadget-n4hxg" podUID="766b85f6-6e39-42ae-a8d7-d232d3903192"
	Sep 24 18:42:48 addons-783184 kubelet[1497]: I0924 18:42:48.807330    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-k6vdg" secret="" err="secret \"gcp-auth\" not found"
	Sep 24 18:42:56 addons-783184 kubelet[1497]: I0924 18:42:56.807875    1497 scope.go:117] "RemoveContainer" containerID="461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411"
	Sep 24 18:42:56 addons-783184 kubelet[1497]: E0924 18:42:56.808069    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-n4hxg_gadget(766b85f6-6e39-42ae-a8d7-d232d3903192)\"" pod="gadget/gadget-n4hxg" podUID="766b85f6-6e39-42ae-a8d7-d232d3903192"
	Sep 24 18:43:11 addons-783184 kubelet[1497]: I0924 18:43:11.807877    1497 scope.go:117] "RemoveContainer" containerID="461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411"
	Sep 24 18:43:11 addons-783184 kubelet[1497]: E0924 18:43:11.808064    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-n4hxg_gadget(766b85f6-6e39-42ae-a8d7-d232d3903192)\"" pod="gadget/gadget-n4hxg" podUID="766b85f6-6e39-42ae-a8d7-d232d3903192"
	Sep 24 18:43:24 addons-783184 kubelet[1497]: I0924 18:43:24.807209    1497 scope.go:117] "RemoveContainer" containerID="461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411"
	Sep 24 18:43:24 addons-783184 kubelet[1497]: E0924 18:43:24.807406    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-n4hxg_gadget(766b85f6-6e39-42ae-a8d7-d232d3903192)\"" pod="gadget/gadget-n4hxg" podUID="766b85f6-6e39-42ae-a8d7-d232d3903192"
	Sep 24 18:43:33 addons-783184 kubelet[1497]: I0924 18:43:33.807965    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-r9m2n" secret="" err="secret \"gcp-auth\" not found"
	Sep 24 18:43:35 addons-783184 kubelet[1497]: I0924 18:43:35.807239    1497 scope.go:117] "RemoveContainer" containerID="461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411"
	Sep 24 18:43:35 addons-783184 kubelet[1497]: E0924 18:43:35.807859    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-n4hxg_gadget(766b85f6-6e39-42ae-a8d7-d232d3903192)\"" pod="gadget/gadget-n4hxg" podUID="766b85f6-6e39-42ae-a8d7-d232d3903192"
	Sep 24 18:43:49 addons-783184 kubelet[1497]: I0924 18:43:49.806962    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-6tcpv" secret="" err="secret \"gcp-auth\" not found"
	Sep 24 18:43:49 addons-783184 kubelet[1497]: I0924 18:43:49.807039    1497 scope.go:117] "RemoveContainer" containerID="461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411"
	Sep 24 18:43:49 addons-783184 kubelet[1497]: E0924 18:43:49.807669    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-n4hxg_gadget(766b85f6-6e39-42ae-a8d7-d232d3903192)\"" pod="gadget/gadget-n4hxg" podUID="766b85f6-6e39-42ae-a8d7-d232d3903192"
	Sep 24 18:43:57 addons-783184 kubelet[1497]: I0924 18:43:57.806984    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-k6vdg" secret="" err="secret \"gcp-auth\" not found"
	Sep 24 18:44:00 addons-783184 kubelet[1497]: I0924 18:44:00.807642    1497 scope.go:117] "RemoveContainer" containerID="461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411"
	Sep 24 18:44:00 addons-783184 kubelet[1497]: E0924 18:44:00.807824    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-n4hxg_gadget(766b85f6-6e39-42ae-a8d7-d232d3903192)\"" pod="gadget/gadget-n4hxg" podUID="766b85f6-6e39-42ae-a8d7-d232d3903192"
	Sep 24 18:44:13 addons-783184 kubelet[1497]: I0924 18:44:13.807426    1497 scope.go:117] "RemoveContainer" containerID="461faadb3012b5090ff3df734ab3983893efa078f39713a04b3ec24d93985411"
	Sep 24 18:44:13 addons-783184 kubelet[1497]: E0924 18:44:13.807631    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-n4hxg_gadget(766b85f6-6e39-42ae-a8d7-d232d3903192)\"" pod="gadget/gadget-n4hxg" podUID="766b85f6-6e39-42ae-a8d7-d232d3903192"
	
	
	==> storage-provisioner [2bde223b96e609669b733bddcb24aad97765ce0342f2fd61c523a72defae2f04] <==
	I0924 18:38:24.296343       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 18:38:24.317043       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 18:38:24.317112       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 18:38:24.343202       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 18:38:24.343470       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-783184_c6c94de1-1dab-40dc-9f2d-cbe1ad17268c!
	I0924 18:38:24.343996       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad7ecc51-4668-4e8b-807d-e5d6fa3dc1ee", APIVersion:"v1", ResourceVersion:"557", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-783184_c6c94de1-1dab-40dc-9f2d-cbe1ad17268c became leader
	I0924 18:38:24.443789       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-783184_c6c94de1-1dab-40dc-9f2d-cbe1ad17268c!
	

-- /stdout --
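
Two patterns in the dump above are worth separating from the actual failure. The kube-scheduler's "forbidden" reflector errors appear only during control-plane startup and stop once its informer caches sync (the last line of that section), while the kubelet shows the gadget container crash-looping for the entire capture window. A few follow-up commands for a run like this, as a diagnostic sketch only (the context and pod name are copied from the logs above and are not part of the test itself):

	# Inspect the crash-looping Inspektor Gadget pod and the output of its previous container:
	kubectl --context addons-783184 -n gadget describe pod gadget-n4hxg
	kubectl --context addons-783184 -n gadget logs gadget-n4hxg --previous
	# Confirm the scheduler's startup RBAC errors did not persist (should print "yes" once RBAC is in place):
	kubectl --context addons-783184 auth can-i list pods --as=system:kube-scheduler
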
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-783184 -n addons-783184
helpers_test.go:261: (dbg) Run:  kubectl --context addons-783184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-f59th ingress-nginx-admission-patch-tlf2n test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-783184 describe pod ingress-nginx-admission-create-f59th ingress-nginx-admission-patch-tlf2n test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-783184 describe pod ingress-nginx-admission-create-f59th ingress-nginx-admission-patch-tlf2n test-job-nginx-0: exit status 1 (87.229856ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-f59th" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tlf2n" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-783184 describe pod ingress-nginx-admission-create-f59th ingress-nginx-admission-patch-tlf2n test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.89s)
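
The failure itself is a scheduling timeout rather than a crash: test-job-nginx-0 stayed Pending with "0/1 nodes are unavailable: 1 Insufficient cpu." on the lone cluster node (the CI host itself reports NCPU:2 in the docker info captured later in this report). To iterate on just this subtest against a locally built binary, something like the following should work; the package path matches minikube's integration-test layout, but treat the exact invocation as an assumption, since CI wraps it in additional flags:

	# Hypothetical local re-run of only the failing subtest:
	go test ./test/integration -run 'TestAddons/serial/Volcano' -v -timeout 30m
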


Test pass (299/327)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.87
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 7.74
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 217.16
31 TestAddons/serial/GCPAuth/Namespaces 0.17
33 TestAddons/parallel/Registry 17.97
34 TestAddons/parallel/Ingress 18.76
35 TestAddons/parallel/InspektorGadget 11.92
36 TestAddons/parallel/MetricsServer 5.81
38 TestAddons/parallel/CSI 39.26
39 TestAddons/parallel/Headlamp 17.07
40 TestAddons/parallel/CloudSpanner 6.68
41 TestAddons/parallel/LocalPath 53.06
42 TestAddons/parallel/NvidiaDevicePlugin 5.74
43 TestAddons/parallel/Yakd 11.83
44 TestAddons/StoppedEnableDisable 12.26
45 TestCertOptions 36.33
46 TestCertExpiration 230.14
48 TestForceSystemdFlag 34.75
49 TestForceSystemdEnv 42.39
50 TestDockerEnvContainerd 44.72
55 TestErrorSpam/setup 28.53
56 TestErrorSpam/start 0.72
57 TestErrorSpam/status 1.08
58 TestErrorSpam/pause 1.79
59 TestErrorSpam/unpause 1.79
60 TestErrorSpam/stop 1.51
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 93.34
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 5.85
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.31
72 TestFunctional/serial/CacheCmd/cache/add_local 1.25
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.99
77 TestFunctional/serial/CacheCmd/cache/delete 0.12
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
80 TestFunctional/serial/ExtraConfig 46.88
81 TestFunctional/serial/ComponentHealth 0.15
82 TestFunctional/serial/LogsCmd 1.7
83 TestFunctional/serial/LogsFileCmd 1.76
84 TestFunctional/serial/InvalidService 4.9
86 TestFunctional/parallel/ConfigCmd 0.45
87 TestFunctional/parallel/DashboardCmd 8.67
88 TestFunctional/parallel/DryRun 0.4
89 TestFunctional/parallel/InternationalLanguage 0.23
90 TestFunctional/parallel/StatusCmd 1.24
94 TestFunctional/parallel/ServiceCmdConnect 9.67
95 TestFunctional/parallel/AddonsCmd 0.19
96 TestFunctional/parallel/PersistentVolumeClaim 25.13
98 TestFunctional/parallel/SSHCmd 0.67
99 TestFunctional/parallel/CpCmd 2.25
101 TestFunctional/parallel/FileSync 0.33
102 TestFunctional/parallel/CertSync 2.07
106 TestFunctional/parallel/NodeLabels 0.1
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
110 TestFunctional/parallel/License 0.22
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.49
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
124 TestFunctional/parallel/ServiceCmd/List 0.64
125 TestFunctional/parallel/ProfileCmd/profile_list 0.56
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.57
129 TestFunctional/parallel/MountCmd/any-port 8.4
130 TestFunctional/parallel/ServiceCmd/Format 0.43
131 TestFunctional/parallel/ServiceCmd/URL 0.42
132 TestFunctional/parallel/MountCmd/specific-port 2.46
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.32
134 TestFunctional/parallel/Version/short 0.08
135 TestFunctional/parallel/Version/components 1.25
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.89
141 TestFunctional/parallel/ImageCommands/Setup 0.79
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.47
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.31
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.58
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 131.49
159 TestMultiControlPlane/serial/DeployApp 30.87
160 TestMultiControlPlane/serial/PingHostFromPods 1.66
161 TestMultiControlPlane/serial/AddWorkerNode 21.66
162 TestMultiControlPlane/serial/NodeLabels 0.13
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.96
164 TestMultiControlPlane/serial/CopyFile 19.27
165 TestMultiControlPlane/serial/StopSecondaryNode 12.83
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
167 TestMultiControlPlane/serial/RestartSecondaryNode 28
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.05
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 130.07
170 TestMultiControlPlane/serial/DeleteSecondaryNode 10.76
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
172 TestMultiControlPlane/serial/StopCluster 36.18
173 TestMultiControlPlane/serial/RestartCluster 68.89
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
175 TestMultiControlPlane/serial/AddSecondaryNode 42.73
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.11
180 TestJSONOutput/start/Command 48.53
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.73
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.65
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.85
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.21
205 TestKicCustomNetwork/create_custom_network 39.69
206 TestKicCustomNetwork/use_default_bridge_network 33.79
207 TestKicExistingNetwork 34.65
208 TestKicCustomSubnet 33.09
209 TestKicStaticIP 35.14
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 71.98
214 TestMountStart/serial/StartWithMountFirst 7.19
215 TestMountStart/serial/VerifyMountFirst 0.27
216 TestMountStart/serial/StartWithMountSecond 5.93
217 TestMountStart/serial/VerifyMountSecond 0.25
218 TestMountStart/serial/DeleteFirst 1.61
219 TestMountStart/serial/VerifyMountPostDelete 0.25
220 TestMountStart/serial/Stop 1.21
221 TestMountStart/serial/RestartStopped 7.88
222 TestMountStart/serial/VerifyMountPostStop 0.25
225 TestMultiNode/serial/FreshStart2Nodes 65.13
226 TestMultiNode/serial/DeployApp2Nodes 17.28
227 TestMultiNode/serial/PingHostFrom2Pods 1
228 TestMultiNode/serial/AddNode 18.72
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.68
231 TestMultiNode/serial/CopyFile 10.01
232 TestMultiNode/serial/StopNode 2.29
233 TestMultiNode/serial/StartAfterStop 9.79
234 TestMultiNode/serial/RestartKeepsNodes 88.84
235 TestMultiNode/serial/DeleteNode 6
236 TestMultiNode/serial/StopMultiNode 23.98
237 TestMultiNode/serial/RestartMultiNode 56.06
238 TestMultiNode/serial/ValidateNameConflict 33.08
243 TestPreload 124.23
245 TestScheduledStopUnix 106.99
248 TestInsufficientStorage 9.88
249 TestRunningBinaryUpgrade 89.13
251 TestKubernetesUpgrade 101.87
252 TestMissingContainerUpgrade 181.31
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 37.17
256 TestNoKubernetes/serial/StartWithStopK8s 23.01
257 TestNoKubernetes/serial/Start 9.06
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
259 TestNoKubernetes/serial/ProfileList 0.96
260 TestNoKubernetes/serial/Stop 1.21
261 TestNoKubernetes/serial/StartNoArgs 6.8
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
263 TestStoppedBinaryUpgrade/Setup 0.95
264 TestStoppedBinaryUpgrade/Upgrade 126.47
273 TestPause/serial/Start 57.37
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
275 TestPause/serial/SecondStartNoReconfiguration 7.04
276 TestPause/serial/Pause 1.08
277 TestPause/serial/VerifyStatus 0.47
278 TestPause/serial/Unpause 1.04
279 TestPause/serial/PauseAgain 0.99
280 TestPause/serial/DeletePaused 3.08
281 TestPause/serial/VerifyDeletedResources 0.87
289 TestNetworkPlugins/group/false 5.41
294 TestStartStop/group/old-k8s-version/serial/FirstStart 170.81
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.03
297 TestStartStop/group/old-k8s-version/serial/DeployApp 9.79
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.46
299 TestStartStop/group/old-k8s-version/serial/Stop 12.44
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
301 TestStartStop/group/old-k8s-version/serial/SecondStart 373.93
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.43
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.24
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.11
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.23
307 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
309 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
310 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.05
312 TestStartStop/group/embed-certs/serial/FirstStart 92.28
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.37
316 TestStartStop/group/old-k8s-version/serial/Pause 3.88
318 TestStartStop/group/no-preload/serial/FirstStart 74.72
319 TestStartStop/group/embed-certs/serial/DeployApp 8.36
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.27
321 TestStartStop/group/embed-certs/serial/Stop 12.21
322 TestStartStop/group/no-preload/serial/DeployApp 9.32
323 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.29
325 TestStartStop/group/embed-certs/serial/SecondStart 267.24
326 TestStartStop/group/no-preload/serial/Stop 12.3
327 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.34
328 TestStartStop/group/no-preload/serial/SecondStart 276.8
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
332 TestStartStop/group/embed-certs/serial/Pause 3.66
334 TestStartStop/group/newest-cni/serial/FirstStart 41.07
335 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
337 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
338 TestStartStop/group/no-preload/serial/Pause 3.84
339 TestNetworkPlugins/group/auto/Start 99.38
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.79
342 TestStartStop/group/newest-cni/serial/Stop 1.34
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
344 TestStartStop/group/newest-cni/serial/SecondStart 23.97
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
348 TestStartStop/group/newest-cni/serial/Pause 3.35
349 TestNetworkPlugins/group/kindnet/Start 83.96
350 TestNetworkPlugins/group/auto/KubeletFlags 0.29
351 TestNetworkPlugins/group/auto/NetCatPod 10.28
352 TestNetworkPlugins/group/auto/DNS 0.2
353 TestNetworkPlugins/group/auto/Localhost 0.15
354 TestNetworkPlugins/group/auto/HairPin 0.15
355 TestNetworkPlugins/group/calico/Start 70.96
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
358 TestNetworkPlugins/group/kindnet/NetCatPod 9.32
359 TestNetworkPlugins/group/kindnet/DNS 0.25
360 TestNetworkPlugins/group/kindnet/Localhost 0.19
361 TestNetworkPlugins/group/kindnet/HairPin 0.21
362 TestNetworkPlugins/group/custom-flannel/Start 57.07
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.36
365 TestNetworkPlugins/group/calico/NetCatPod 10.51
366 TestNetworkPlugins/group/calico/DNS 0.21
367 TestNetworkPlugins/group/calico/Localhost 0.15
368 TestNetworkPlugins/group/calico/HairPin 0.17
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.39
371 TestNetworkPlugins/group/enable-default-cni/Start 74.47
372 TestNetworkPlugins/group/custom-flannel/DNS 0.27
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
375 TestNetworkPlugins/group/flannel/Start 52.19
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.27
378 TestNetworkPlugins/group/flannel/ControllerPod 6.01
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
383 TestNetworkPlugins/group/flannel/NetCatPod 9.31
384 TestNetworkPlugins/group/flannel/DNS 0.28
385 TestNetworkPlugins/group/flannel/Localhost 0.22
386 TestNetworkPlugins/group/flannel/HairPin 0.2
387 TestNetworkPlugins/group/bridge/Start 71.31
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
389 TestNetworkPlugins/group/bridge/NetCatPod 9.26
390 TestNetworkPlugins/group/bridge/DNS 0.22
391 TestNetworkPlugins/group/bridge/Localhost 0.17
392 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (8.87s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-168781 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-168781 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.870418002s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.87s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0924 18:37:15.163008  445436 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0924 18:37:15.163099  445436 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-440051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-168781
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-168781: exit status 85 (79.11471ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-168781 | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC |          |
	|         | -p download-only-168781        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:37:06
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:37:06.333960  445441 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:37:06.334159  445441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:37:06.334172  445441 out.go:358] Setting ErrFile to fd 2...
	I0924 18:37:06.334178  445441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:37:06.334454  445441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
	W0924 18:37:06.334630  445441 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19700-440051/.minikube/config/config.json: open /home/jenkins/minikube-integration/19700-440051/.minikube/config/config.json: no such file or directory
	I0924 18:37:06.335080  445441 out.go:352] Setting JSON to true
	I0924 18:37:06.335996  445441 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8376,"bootTime":1727194651,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0924 18:37:06.336076  445441 start.go:139] virtualization:  
	I0924 18:37:06.339173  445441 out.go:97] [download-only-168781] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0924 18:37:06.339350  445441 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19700-440051/.minikube/cache/preloaded-tarball: no such file or directory
	I0924 18:37:06.339388  445441 notify.go:220] Checking for updates...
	I0924 18:37:06.341502  445441 out.go:169] MINIKUBE_LOCATION=19700
	I0924 18:37:06.343786  445441 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:37:06.345565  445441 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19700-440051/kubeconfig
	I0924 18:37:06.347594  445441 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-440051/.minikube
	I0924 18:37:06.349390  445441 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0924 18:37:06.353724  445441 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0924 18:37:06.354099  445441 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:37:06.379454  445441 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 18:37:06.379563  445441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:37:06.446275  445441 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 18:37:06.436414739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:37:06.446389  445441 docker.go:318] overlay module found
	I0924 18:37:06.448527  445441 out.go:97] Using the docker driver based on user configuration
	I0924 18:37:06.448560  445441 start.go:297] selected driver: docker
	I0924 18:37:06.448568  445441 start.go:901] validating driver "docker" against <nil>
	I0924 18:37:06.448691  445441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:37:06.505879  445441 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 18:37:06.495590988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:37:06.506101  445441 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:37:06.506415  445441 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0924 18:37:06.506617  445441 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 18:37:06.509101  445441 out.go:169] Using Docker driver with root privileges
	I0924 18:37:06.510890  445441 cni.go:84] Creating CNI manager for ""
	I0924 18:37:06.510988  445441 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0924 18:37:06.511000  445441 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0924 18:37:06.511132  445441 start.go:340] cluster config:
	{Name:download-only-168781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-168781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:37:06.513474  445441 out.go:97] Starting "download-only-168781" primary control-plane node in "download-only-168781" cluster
	I0924 18:37:06.513523  445441 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0924 18:37:06.515813  445441 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0924 18:37:06.515866  445441 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0924 18:37:06.515956  445441 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0924 18:37:06.532419  445441 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0924 18:37:06.532648  445441 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0924 18:37:06.532753  445441 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0924 18:37:06.575724  445441 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0924 18:37:06.575751  445441 cache.go:56] Caching tarball of preloaded images
	I0924 18:37:06.575893  445441 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0924 18:37:06.578117  445441 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0924 18:37:06.578140  445441 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0924 18:37:06.679229  445441 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19700-440051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-168781 host does not exist
	  To start a cluster, run: "minikube start -p download-only-168781"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
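
A side note on the "Last Start" log above: the preload tarball is fetched with an md5 hash appended to the URL (?checksum=md5:...), which the downloader uses to verify the file after transfer. The equivalent manual check, as a minimal shell sketch (URL and hash are copied from the log; the local file name is arbitrary):

	url='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4'
	curl -fLo preload.tar.lz4 "$url"
	echo '7e3d48ccb9f143791669d02e14ce1643  preload.tar.lz4' | md5sum -c -
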

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-168781
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (7.74s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-679007 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-679007 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.738618704s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (7.74s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0924 18:37:23.309668  445436 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0924 18:37:23.309708  445436 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-440051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-679007
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-679007: exit status 85 (73.034145ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-168781 | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC |                     |
	|         | -p download-only-168781        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC | 24 Sep 24 18:37 UTC |
	| delete  | -p download-only-168781        | download-only-168781 | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC | 24 Sep 24 18:37 UTC |
	| start   | -o=json --download-only        | download-only-679007 | jenkins | v1.34.0 | 24 Sep 24 18:37 UTC |                     |
	|         | -p download-only-679007        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:37:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:37:15.620666  445644 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:37:15.620807  445644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:37:15.620816  445644 out.go:358] Setting ErrFile to fd 2...
	I0924 18:37:15.620821  445644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:37:15.621061  445644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
	I0924 18:37:15.621486  445644 out.go:352] Setting JSON to true
	I0924 18:37:15.622344  445644 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8385,"bootTime":1727194651,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0924 18:37:15.622416  445644 start.go:139] virtualization:  
	I0924 18:37:15.625546  445644 out.go:97] [download-only-679007] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0924 18:37:15.625792  445644 notify.go:220] Checking for updates...
	I0924 18:37:15.628136  445644 out.go:169] MINIKUBE_LOCATION=19700
	I0924 18:37:15.630677  445644 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:37:15.632606  445644 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19700-440051/kubeconfig
	I0924 18:37:15.634919  445644 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-440051/.minikube
	I0924 18:37:15.637093  445644 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0924 18:37:15.641485  445644 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0924 18:37:15.641770  445644 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:37:15.675839  445644 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 18:37:15.675974  445644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:37:15.725005  445644 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-24 18:37:15.714575354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:37:15.725127  445644 docker.go:318] overlay module found
	I0924 18:37:15.727386  445644 out.go:97] Using the docker driver based on user configuration
	I0924 18:37:15.727420  445644 start.go:297] selected driver: docker
	I0924 18:37:15.727428  445644 start.go:901] validating driver "docker" against <nil>
	I0924 18:37:15.727536  445644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:37:15.787971  445644 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-24 18:37:15.776622243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:37:15.788221  445644 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:37:15.788534  445644 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0924 18:37:15.788693  445644 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 18:37:15.791245  445644 out.go:169] Using Docker driver with root privileges
	I0924 18:37:15.793091  445644 cni.go:84] Creating CNI manager for ""
	I0924 18:37:15.793160  445644 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0924 18:37:15.793177  445644 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0924 18:37:15.793265  445644 start.go:340] cluster config:
	{Name:download-only-679007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-679007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:37:15.795477  445644 out.go:97] Starting "download-only-679007" primary control-plane node in "download-only-679007" cluster
	I0924 18:37:15.795501  445644 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0924 18:37:15.797774  445644 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0924 18:37:15.797800  445644 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0924 18:37:15.797959  445644 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0924 18:37:15.812841  445644 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0924 18:37:15.812983  445644 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0924 18:37:15.813007  445644 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0924 18:37:15.813016  445644 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0924 18:37:15.813024  445644 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0924 18:37:15.852633  445644 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0924 18:37:15.852659  445644 cache.go:56] Caching tarball of preloaded images
	I0924 18:37:15.852816  445644 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0924 18:37:15.854908  445644 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0924 18:37:15.854942  445644 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0924 18:37:15.942688  445644 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19700-440051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-679007 host does not exist
	  To start a cluster, run: "minikube start -p download-only-679007"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-679007
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

x
+
TestBinaryMirror (0.58s)
=== RUN   TestBinaryMirror
I0924 18:37:24.539014  445436 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-636468 --alsologtostderr --binary-mirror http://127.0.0.1:43135 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-636468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-636468
--- PASS: TestBinaryMirror (0.58s)
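To reproduce this flow by hand — pointing minikube's binary download at a local mirror — the commands below are lifted from the log above; the mirror URL is the test's throwaway local address and only illustrative, so substitute your own mirror:

    # download-only start against a local binary mirror (URL is illustrative)
    out/minikube-linux-arm64 start --download-only -p binary-mirror-636468 \
      --binary-mirror http://127.0.0.1:43135 \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 delete -p binary-mirror-636468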
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-783184
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-783184: exit status 85 (73.566367ms)

-- stdout --
	* Profile "addons-783184" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-783184"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-783184
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-783184: exit status 85 (65.453616ms)

-- stdout --
	* Profile "addons-783184" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-783184"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

x
+
TestAddons/Setup (217.16s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-783184 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-783184 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m37.16088743s)
--- PASS: TestAddons/Setup (217.16s)

x
+
TestAddons/serial/GCPAuth/Namespaces (0.17s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-783184 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-783184 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

x
+
TestAddons/parallel/Registry (17.97s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.842322ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-r9m2n" [4a0f5031-6fb0-4e60-83fd-6ae70f4d567b] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.009113636s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-k6vdg" [2bbc91c6-09cc-4e9b-a450-9dde1f42116b] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003424337s
addons_test.go:338: (dbg) Run:  kubectl --context addons-783184 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-783184 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-783184 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.536090032s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-783184 ip
2024/09/24 18:44:57 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-783184 addons disable registry --alsologtostderr -v=1
addons_test.go:386: (dbg) Done: out/minikube-linux-arm64 -p addons-783184 addons disable registry --alsologtostderr -v=1: (1.190729627s)
--- PASS: TestAddons/parallel/Registry (17.97s)
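To reproduce the registry check by hand: the in-cluster probe is the wget call from the log; hitting port 5000 on the node IP from the host is an assumption based on the registry-proxy wiring shown in the DEBUG GET line above:

    # probe the registry service from inside the cluster
    kubectl --context addons-783184 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # then from the host, against the node IP reported by `minikube ip`
    curl -v http://192.168.49.2:5000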
x
+
TestAddons/parallel/Ingress (18.76s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-783184 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-783184 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-783184 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1ed5e3ff-04b4-4dac-b4db-c32bafaf4727] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1ed5e3ff-04b4-4dac-b4db-c32bafaf4727] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003633736s
I0924 18:46:10.767311  445436 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-783184 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-783184 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-783184 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-783184 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-783184 addons disable ingress-dns --alsologtostderr -v=1: (1.143395669s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-783184 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-783184 addons disable ingress --alsologtostderr -v=1: (7.856775853s)
--- PASS: TestAddons/parallel/Ingress (18.76s)
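The same checks can be run by hand; both commands are taken verbatim from the log, with the node IP coming from `minikube ip`:

    # exercise the ingress with the expected Host header
    out/minikube-linux-arm64 -p addons-783184 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # resolve a test name through the ingress-dns addon
    nslookup hello-john.test 192.168.49.2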
x
+
TestAddons/parallel/InspektorGadget (11.92s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-n4hxg" [766b85f6-6e39-42ae-a8d7-d232d3903192] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.010977264s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-783184
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-783184: (5.903707257s)
--- PASS: TestAddons/parallel/InspektorGadget (11.92s)

x
+
TestAddons/parallel/MetricsServer (5.81s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 4.673485ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-4ckmr" [2612b668-6d6b-42bf-983a-8f4e0af581d2] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00391839s
addons_test.go:413: (dbg) Run:  kubectl --context addons-783184 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-783184 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.81s)

x
+
TestAddons/parallel/CSI (39.26s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0924 18:45:22.874053  445436 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0924 18:45:22.878747  445436 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0924 18:45:22.878777  445436 kapi.go:107] duration metric: took 7.03252ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.041881ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-783184 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-783184 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1de6c37e-d644-4b09-9d65-94b512156989] Pending
helpers_test.go:344: "task-pv-pod" [1de6c37e-d644-4b09-9d65-94b512156989] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1de6c37e-d644-4b09-9d65-94b512156989] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004189699s
addons_test.go:528: (dbg) Run:  kubectl --context addons-783184 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-783184 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-783184 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-783184 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-783184 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-783184 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-783184 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b5fca5f1-6da9-40b4-a70b-d2466d234bc0] Pending
helpers_test.go:344: "task-pv-pod-restore" [b5fca5f1-6da9-40b4-a70b-d2466d234bc0] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.029525564s
addons_test.go:570: (dbg) Run:  kubectl --context addons-783184 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-783184 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-783184 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-783184 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-783184 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.803231381s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-783184 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:586: (dbg) Done: out/minikube-linux-arm64 -p addons-783184 addons disable volumesnapshots --alsologtostderr -v=1: (1.032298475s)
--- PASS: TestAddons/parallel/CSI (39.26s)
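The round trip above condenses to the following sequence; the manifests are the repository's testdata files referenced in the log and are not reproduced here:

    kubectl --context addons-783184 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-783184 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-783184 create -f testdata/csi-hostpath-driver/snapshot.yaml
    # delete the source objects, then restore from the snapshot
    kubectl --context addons-783184 delete pod task-pv-pod
    kubectl --context addons-783184 delete pvc hpvc
    kubectl --context addons-783184 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-783184 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml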
x
+
TestAddons/parallel/Headlamp (17.07s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-783184 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-783184 --alsologtostderr -v=1: (1.212834002s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-2jb6p" [2b01f65d-68a1-4ab1-b34e-d86c2779a3f3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-2jb6p" [2b01f65d-68a1-4ab1-b34e-d86c2779a3f3] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003818761s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-783184 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-783184 addons disable headlamp --alsologtostderr -v=1: (5.84886497s)
--- PASS: TestAddons/parallel/Headlamp (17.07s)

x
+
TestAddons/parallel/CloudSpanner (6.68s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-zz9tw" [3cdd285f-5494-4e23-8bac-cb492622ff15] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004152605s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-783184
--- PASS: TestAddons/parallel/CloudSpanner (6.68s)

x
+
TestAddons/parallel/LocalPath (53.06s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-783184 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-783184 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783184 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3fcbab83-a128-4827-8c23-b0dd5113166b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3fcbab83-a128-4827-8c23-b0dd5113166b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3fcbab83-a128-4827-8c23-b0dd5113166b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003785768s
addons_test.go:938: (dbg) Run:  kubectl --context addons-783184 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-783184 ssh "cat /opt/local-path-provisioner/pvc-f0779066-4112-474c-9806-43e3a9c41bce_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-783184 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-783184 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-783184 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-783184 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.695943559s)
--- PASS: TestAddons/parallel/LocalPath (53.06s)
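To verify the provisioned data by hand: the provisioner writes under /opt/local-path-provisioner on the node, in a directory named after the claim's volume; the exact pvc-... path comes from inspecting the claim, as the test does. `<pvc-dir>` below is a placeholder for that directory:

    kubectl --context addons-783184 get pvc test-pvc -o=json    # yields the pvc-<uid> volume name
    out/minikube-linux-arm64 -p addons-783184 ssh "cat /opt/local-path-provisioner/<pvc-dir>/file1"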
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.74s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6tcpv" [c9ba084e-6cd2-428d-82c2-c10dd5f9d5d5] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005096327s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-783184
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.74s)

x
+
TestAddons/parallel/Yakd (11.83s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-psgsz" [4d831b6a-27d3-466d-8c1e-165c026402a6] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003686286s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-783184 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-783184 addons disable yakd --alsologtostderr -v=1: (5.826187084s)
--- PASS: TestAddons/parallel/Yakd (11.83s)

x
+
TestAddons/StoppedEnableDisable (12.26s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-783184
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-783184: (11.988106365s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-783184
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-783184
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-783184
--- PASS: TestAddons/StoppedEnableDisable (12.26s)

x
+
TestCertOptions (36.33s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-100650 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-100650 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.712370849s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-100650 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-100650 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-100650 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-100650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-100650
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-100650: (1.976343214s)
--- PASS: TestCertOptions (36.33s)
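The scenario reduces to two commands, both lifted from the log: start with extra SANs and a non-default apiserver port, then inspect the certificate actually served:

    out/minikube-linux-arm64 start -p cert-options-100650 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=docker --container-runtime=containerd
    # the SANs and port above should appear in the cert's extensions
    out/minikube-linux-arm64 -p cert-options-100650 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"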
x
+
TestCertExpiration (230.14s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-446360 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-446360 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.315397245s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-446360 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
E0924 19:25:48.458868  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-446360 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.55426382s)
helpers_test.go:175: Cleaning up "cert-expiration-446360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-446360
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-446360: (2.268844362s)
--- PASS: TestCertExpiration (230.14s)
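The rotation flow, condensed from the log: issue deliberately short-lived certificates, let them lapse, then restart with a long expiry so minikube regenerates them:

    out/minikube-linux-arm64 start -p cert-expiration-446360 --memory=2048 \
      --cert-expiration=3m --driver=docker --container-runtime=containerd
    # ... wait out the 3m window ...
    out/minikube-linux-arm64 start -p cert-expiration-446360 --memory=2048 \
      --cert-expiration=8760h --driver=docker --container-runtime=containerd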
x
+
TestForceSystemdFlag (34.75s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-517492 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-517492 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.947347716s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-517492 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-517492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-517492
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-517492: (2.452401768s)
--- PASS: TestForceSystemdFlag (34.75s)
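A hand-run equivalent of this check; the grep target is an assumption based on containerd's standard runc options, where --force-systemd should surface as SystemdCgroup = true:

    out/minikube-linux-arm64 start -p force-systemd-flag-517492 --memory=2048 \
      --force-systemd --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p force-systemd-flag-517492 ssh \
      "cat /etc/containerd/config.toml" | grep SystemdCgroup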
x
+
TestForceSystemdEnv (42.39s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-060599 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-060599 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.525482888s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-060599 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-060599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-060599
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-060599: (2.484905281s)
--- PASS: TestForceSystemdEnv (42.39s)

x
+
TestDockerEnvContainerd (44.72s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-009919 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-009919 --driver=docker  --container-runtime=containerd: (29.331887053s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-009919"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-e2RBR4ZTe1Cl/agent.464542" SSH_AGENT_PID="464543" DOCKER_HOST=ssh://docker@127.0.0.1:33169 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-e2RBR4ZTe1Cl/agent.464542" SSH_AGENT_PID="464543" DOCKER_HOST=ssh://docker@127.0.0.1:33169 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-e2RBR4ZTe1Cl/agent.464542" SSH_AGENT_PID="464543" DOCKER_HOST=ssh://docker@127.0.0.1:33169 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.076417063s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-e2RBR4ZTe1Cl/agent.464542" SSH_AGENT_PID="464543" DOCKER_HOST=ssh://docker@127.0.0.1:33169 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-009919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-009919
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-009919: (1.930195474s)
--- PASS: TestDockerEnvContainerd (44.72s)
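The docker-env handshake the test drives, in interactive form; the eval wrapper is the usual way to apply the emitted environment, and --ssh-host/--ssh-add route the docker CLI to the node over SSH, as the DOCKER_HOST=ssh://... lines above show:

    out/minikube-linux-arm64 start -p dockerenv-009919 --driver=docker --container-runtime=containerd
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-009919)"
    docker version
    docker image ls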
x
+
TestErrorSpam/setup (28.53s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-642559 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-642559 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-642559 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-642559 --driver=docker  --container-runtime=containerd: (28.530542234s)
--- PASS: TestErrorSpam/setup (28.53s)

x
+
TestErrorSpam/start (0.72s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)

x
+
TestErrorSpam/status (1.08s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 status
--- PASS: TestErrorSpam/status (1.08s)

x
+
TestErrorSpam/pause (1.79s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 pause
--- PASS: TestErrorSpam/pause (1.79s)

x
+
TestErrorSpam/unpause (1.79s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

x
+
TestErrorSpam/stop (1.51s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 stop: (1.314774102s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-642559 --log_dir /tmp/nospam-642559 stop
--- PASS: TestErrorSpam/stop (1.51s)

x
+
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19700-440051/.minikube/files/etc/test/nested/copy/445436/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

x
+
TestFunctional/serial/StartWithProxy (93.34s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-371992 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-371992 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m33.339076506s)
--- PASS: TestFunctional/serial/StartWithProxy (93.34s)

x
+
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

x
+
TestFunctional/serial/SoftStart (5.85s)
=== RUN   TestFunctional/serial/SoftStart
I0924 18:49:36.983628  445436 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-371992 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-371992 --alsologtostderr -v=8: (5.851364505s)
functional_test.go:663: soft start took 5.852745841s for "functional-371992" cluster.
I0924 18:49:42.835311  445436 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (5.85s)

x
+
TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

x
+
TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-371992 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-371992 cache add registry.k8s.io/pause:3.1: (1.519116954s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-371992 cache add registry.k8s.io/pause:3.3: (1.509163756s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-371992 cache add registry.k8s.io/pause:latest: (1.283886477s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.31s)

x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-371992 /tmp/TestFunctionalserialCacheCmdcacheadd_local3436648614/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 cache add minikube-local-cache-test:functional-371992
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 cache delete minikube-local-cache-test:functional-371992
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-371992
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371992 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.179547ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-371992 cache reload: (1.093776937s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)
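The reload cycle above, as one would run it by hand (exit statuses annotated from the log):

    out/minikube-linux-arm64 -p functional-371992 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-371992 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    out/minikube-linux-arm64 -p functional-371992 cache reload
    out/minikube-linux-arm64 -p functional-371992 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: restored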
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 kubectl -- --context functional-371992 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-371992 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (46.88s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-371992 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-371992 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.87812272s)
functional_test.go:761: restart took 46.878810718s for "functional-371992" cluster.
I0924 18:50:38.224188  445436 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (46.88s)
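
Note: the restart above demonstrates passing a component flag into a live cluster. A sketch with the same flag (the value takes the form component.key=value):

	# restart the existing profile, injecting an apiserver admission plugin and waiting for all components
	out/minikube-linux-arm64 start -p functional-371992 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all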

TestFunctional/serial/ComponentHealth (0.15s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-371992 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.15s)

TestFunctional/serial/LogsCmd (1.7s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-371992 logs: (1.694712978s)
--- PASS: TestFunctional/serial/LogsCmd (1.70s)

TestFunctional/serial/LogsFileCmd (1.76s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 logs --file /tmp/TestFunctionalserialLogsFileCmd2401964634/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-371992 logs --file /tmp/TestFunctionalserialLogsFileCmd2401964634/001/logs.txt: (1.762942278s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.76s)

TestFunctional/serial/InvalidService (4.9s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-371992 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-371992
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-371992: exit status 115 (548.279971ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31165 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-371992 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-371992 delete -f testdata/invalidsvc.yaml: (1.09015581s)
--- PASS: TestFunctional/serial/InvalidService (4.90s)
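
Note: this failure mode is easy to reproduce. A sketch using the test's manifest; any Service whose selector matches no running pod should behave the same way:

	kubectl --context functional-371992 apply -f testdata/invalidsvc.yaml
	# expected: exit status 115 (SVC_UNREACHABLE), since no running pod backs the service
	out/minikube-linux-arm64 service invalid-svc -p functional-371992
	kubectl --context functional-371992 delete -f testdata/invalidsvc.yaml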

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371992 config get cpus: exit status 14 (94.467367ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371992 config get cpus: exit status 14 (60.66808ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
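
Note: the round trip above doubles as a usage sketch for the config subcommands; "config get" on an unset key exits 14, which is what the test asserts twice:

	out/minikube-linux-arm64 -p functional-371992 config set cpus 2    # persist a value
	out/minikube-linux-arm64 -p functional-371992 config get cpus      # prints 2
	out/minikube-linux-arm64 -p functional-371992 config unset cpus    # remove it again
	out/minikube-linux-arm64 -p functional-371992 config get cpus      # exit status 14: key not found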

TestFunctional/parallel/DashboardCmd (8.67s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-371992 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-371992 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 479208: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.67s)

TestFunctional/parallel/DryRun (0.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-371992 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-371992 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (170.348978ms)

-- stdout --
	* [functional-371992] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-440051/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-440051/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0924 18:51:18.450455  478908 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:51:18.450695  478908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:51:18.450723  478908 out.go:358] Setting ErrFile to fd 2...
	I0924 18:51:18.450741  478908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:51:18.451018  478908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
	I0924 18:51:18.451442  478908 out.go:352] Setting JSON to false
	I0924 18:51:18.452467  478908 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9228,"bootTime":1727194651,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0924 18:51:18.452576  478908 start.go:139] virtualization:  
	I0924 18:51:18.454965  478908 out.go:177] * [functional-371992] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0924 18:51:18.458920  478908 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:51:18.459025  478908 notify.go:220] Checking for updates...
	I0924 18:51:18.462565  478908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:51:18.464982  478908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-440051/kubeconfig
	I0924 18:51:18.466727  478908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-440051/.minikube
	I0924 18:51:18.468540  478908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0924 18:51:18.470231  478908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:51:18.472336  478908 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 18:51:18.472884  478908 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:51:18.499150  478908 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 18:51:18.499286  478908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:51:18.559448  478908 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 18:51:18.539888035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:51:18.559562  478908 docker.go:318] overlay module found
	I0924 18:51:18.561609  478908 out.go:177] * Using the docker driver based on existing profile
	I0924 18:51:18.563467  478908 start.go:297] selected driver: docker
	I0924 18:51:18.563483  478908 start.go:901] validating driver "docker" against &{Name:functional-371992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-371992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:51:18.563714  478908 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:51:18.566333  478908 out.go:201] 
	W0924 18:51:18.568147  478908 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0924 18:51:18.569983  478908 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-371992 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.40s)
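
Note: --dry-run runs only the validation path, so the undersized request is rejected without touching the cluster. A sketch of both outcomes seen above:

	# fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY): 250MB is below the 1800MB usable minimum
	out/minikube-linux-arm64 start -p functional-371992 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
	# the same invocation without the memory override validates cleanly
	out/minikube-linux-arm64 start -p functional-371992 --dry-run --driver=docker --container-runtime=containerd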

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-371992 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-371992 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (231.914547ms)

-- stdout --
	* [functional-371992] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-440051/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-440051/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0924 18:51:18.224639  478801 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:51:18.224858  478801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:51:18.224869  478801 out.go:358] Setting ErrFile to fd 2...
	I0924 18:51:18.224874  478801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:51:18.225353  478801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
	I0924 18:51:18.226009  478801 out.go:352] Setting JSON to false
	I0924 18:51:18.227421  478801 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9228,"bootTime":1727194651,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0924 18:51:18.227510  478801 start.go:139] virtualization:  
	I0924 18:51:18.233097  478801 out.go:177] * [functional-371992] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0924 18:51:18.236348  478801 notify.go:220] Checking for updates...
	I0924 18:51:18.240281  478801 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:51:18.243223  478801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:51:18.246097  478801 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-440051/kubeconfig
	I0924 18:51:18.248929  478801 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-440051/.minikube
	I0924 18:51:18.252071  478801 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0924 18:51:18.254263  478801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:51:18.257224  478801 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 18:51:18.258042  478801 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:51:18.293622  478801 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 18:51:18.293763  478801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:51:18.387363  478801 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 18:51:18.377169945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:51:18.387483  478801 docker.go:318] overlay module found
	I0924 18:51:18.390388  478801 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0924 18:51:18.392273  478801 start.go:297] selected driver: docker
	I0924 18:51:18.392293  478801 start.go:901] validating driver "docker" against &{Name:functional-371992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-371992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:51:18.392419  478801 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:51:18.394800  478801 out.go:201] 
	W0924 18:51:18.396575  478801 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0924 18:51:18.398619  478801 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
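
Note: the French output comes from minikube's translated message catalog. The log does not show how the harness selects the locale; assuming the standard locale environment mechanism, a sketch would be:

	# LC_ALL=fr_FR.UTF-8 is an assumption; the variable the harness actually sets is not visible in this log
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-371992 --dry-run --memory 250MB --driver=docker --container-runtime=containerd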

TestFunctional/parallel/StatusCmd (1.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.24s)
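
Note: the three invocations above cover the status output modes. A sketch of the same calls; the -f template keys are arbitrary labels over fields of the status structure:

	out/minikube-linux-arm64 -p functional-371992 status             # human-readable
	out/minikube-linux-arm64 -p functional-371992 status -o json    # machine-readable
	# Go template over selected status fields
	out/minikube-linux-arm64 -p functional-371992 status -f host:{{.Host}},kubeconfig:{{.Kubeconfig}}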

TestFunctional/parallel/ServiceCmdConnect (9.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-371992 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-371992 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-wxj2h" [c36fc2d5-5b38-4e63-b403-55e25d095ee4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-wxj2h" [c36fc2d5-5b38-4e63-b403-55e25d095ee4] Running
E0924 18:51:02.714300  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:51:03.036067  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:51:03.678169  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:51:04.960209  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004678724s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31267
functional_test.go:1675: http://192.168.49.2:31267: success! body:

Hostname: hello-node-connect-65d86f57f4-wxj2h

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31267
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.67s)
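
Note: this is the standard NodePort round trip. A sketch with the same names and image as this run:

	kubectl --context functional-371992 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-371992 expose deployment hello-node-connect --type=NodePort --port=8080
	# once the pod is Running, resolve the node URL (curl it to get the echo response shown above)
	out/minikube-linux-arm64 -p functional-371992 service hello-node-connect --url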

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (25.13s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [92b6ff77-a7bf-4341-aecc-0de2574c6d41] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00526397s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-371992 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-371992 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-371992 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-371992 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e053456f-c6a8-451c-9f37-6dfe5548dd77] Pending
helpers_test.go:344: "sp-pod" [e053456f-c6a8-451c-9f37-6dfe5548dd77] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e053456f-c6a8-451c-9f37-6dfe5548dd77] Running
E0924 18:51:02.390120  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:51:02.396574  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:51:02.408186  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:51:02.429626  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:51:02.471006  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:51:02.552508  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004516117s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-371992 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-371992 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-371992 delete -f testdata/storage-provisioner/pod.yaml: (1.070530637s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-371992 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [688ad406-612d-4a7c-985f-17bdc2d8e6eb] Pending
E0924 18:51:07.521558  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [688ad406-612d-4a7c-985f-17bdc2d8e6eb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003161438s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-371992 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.13s)
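
Note: the sequence above is a persistence check; data written through the claim must survive pod deletion. A sketch with the test's manifests (any pod mounting the same PVC works):

	kubectl --context functional-371992 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-371992 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-371992 exec sp-pod -- touch /tmp/mount/foo
	# recreate the pod; the PVC-backed file should still be present
	kubectl --context functional-371992 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-371992 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-371992 exec sp-pod -- ls /tmp/mount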

TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (2.25s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh -n functional-371992 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 cp functional-371992:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1438106709/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh -n functional-371992 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh -n functional-371992 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.25s)
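
Note: the three cp calls above cover host-to-node, node-to-host, and copy into a directory that does not yet exist. A sketch of the first two (the host destination path here is arbitrary):

	# host file into the node
	out/minikube-linux-arm64 -p functional-371992 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# node file back out to the host
	out/minikube-linux-arm64 -p functional-371992 cp functional-371992:/home/docker/cp-test.txt /tmp/cp-test.txt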

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/445436/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "sudo cat /etc/test/nested/copy/445436/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (2.07s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/445436.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "sudo cat /etc/ssl/certs/445436.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/445436.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "sudo cat /usr/share/ca-certificates/445436.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/4454362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "sudo cat /etc/ssl/certs/4454362.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/4454362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "sudo cat /usr/share/ca-certificates/4454362.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.07s)
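
Note: each certificate is probed under both its source-derived name (445436.pem) and what appears to be its OpenSSL-style subject-hash name (51391683.0), the link TLS clients resolve. A spot check reduces to:

	out/minikube-linux-arm64 -p functional-371992 ssh "sudo cat /etc/ssl/certs/51391683.0"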

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-371992 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371992 ssh "sudo systemctl is-active docker": exit status 1 (273.62707ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371992 ssh "sudo systemctl is-active crio": exit status 1 (378.886878ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
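
Note: exit status 3 is systemctl's code for an inactive unit, so the non-zero exits above are the expected outcome, not failures. The check reduces to:

	# containerd is the active runtime here, so both should print "inactive" and exit 3
	out/minikube-linux-arm64 -p functional-371992 ssh "sudo systemctl is-active docker"
	out/minikube-linux-arm64 -p functional-371992 ssh "sudo systemctl is-active crio"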

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-371992 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-371992 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-371992 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 476498: os: process already finished
helpers_test.go:502: unable to terminate pid 476304: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-371992 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-371992 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-371992 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5d0a03f1-cd4d-4f76-b930-957711f79ae3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5d0a03f1-cd4d-4f76-b930-957711f79ae3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003967721s
I0924 18:50:57.942757  445436 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-371992 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
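
Note: with the tunnel running, the LoadBalancer service is assigned a reachable ingress IP; the jsonpath query above is the generic way to read it back for any LoadBalancer service:

	kubectl --context functional-371992 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}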

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.181.241 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-371992 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-371992 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-371992 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-pll7b" [d82cd949-3c62-49a3-8063-9e02c29cb6b5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-pll7b" [d82cd949-3c62-49a3-8063-9e02c29cb6b5] Running
E0924 18:51:12.643702  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003474035s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

TestFunctional/parallel/ServiceCmd/List (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "495.981524ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "64.224044ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 service list -o json
functional_test.go:1494: Took "639.408379ms" to run "out/minikube-linux-arm64 -p functional-371992 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "388.567646ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "91.193272ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30826
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

TestFunctional/parallel/MountCmd/any-port (8.4s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-371992 /tmp/TestFunctionalparallelMountCmdany-port1551609702/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727203875693836591" to /tmp/TestFunctionalparallelMountCmdany-port1551609702/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727203875693836591" to /tmp/TestFunctionalparallelMountCmdany-port1551609702/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727203875693836591" to /tmp/TestFunctionalparallelMountCmdany-port1551609702/001/test-1727203875693836591
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371992 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (435.502435ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0924 18:51:16.130284  445436 retry.go:31] will retry after 484.125957ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 24 18:51 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 24 18:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 24 18:51 test-1727203875693836591
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh cat /mount-9p/test-1727203875693836591
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-371992 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [13e25a5f-2b80-4411-9768-6e8aff659633] Pending
helpers_test.go:344: "busybox-mount" [13e25a5f-2b80-4411-9768-6e8aff659633] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [13e25a5f-2b80-4411-9768-6e8aff659633] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0924 18:51:22.885308  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [13e25a5f-2b80-4411-9768-6e8aff659633] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00458713s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-371992 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-371992 /tmp/TestFunctionalparallelMountCmdany-port1551609702/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.40s)
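
The findmnt probe above fails once before the 9p mount appears, and retry.go backs off and re-probes. A rough Go sketch of that poll-until-mounted pattern; the loop shape and timings are assumptions, not minikube's actual retry.go implementation, while the binary path and profile name are taken from this run:

// mountpoll.go — poll the guest until /mount-9p shows up as a 9p mount.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(10 * time.Second)
	for {
		// Same probe the test runs over minikube ssh.
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-371992",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is mounted")
			return
		}
		if time.Now().After(deadline) {
			log.Fatal("gave up waiting for /mount-9p")
		}
		log.Printf("not mounted yet (%v), retrying", err)
		time.Sleep(500 * time.Millisecond)
	}
}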

TestFunctional/parallel/ServiceCmd/Format (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30826
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)
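
Once `service --url` resolves an endpoint (http://192.168.49.2:30826 in this run), it can be probed directly. A minimal sketch, assuming the command prints a single URL on stdout as it does here:

// svcprobe.go — resolve the hello-node endpoint the way the test does,
// then issue a plain GET against it.
package main

import (
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-371992",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:30826
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("hello-node answered with", resp.Status)
}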

TestFunctional/parallel/MountCmd/specific-port (2.46s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-371992 /tmp/TestFunctionalparallelMountCmdspecific-port89276082/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371992 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (539.051703ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0924 18:51:24.632429  445436 retry.go:31] will retry after 704.840671ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-371992 /tmp/TestFunctionalparallelMountCmdspecific-port89276082/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371992 ssh "sudo umount -f /mount-9p": exit status 1 (336.632125ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-371992 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-371992 /tmp/TestFunctionalparallelMountCmdspecific-port89276082/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.46s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.32s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-371992 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3095173064/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-371992 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3095173064/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-371992 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3095173064/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "findmnt -T" /mount1
2024/09/24 18:51:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371992 ssh "findmnt -T" /mount1: exit status 1 (854.4757ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0924 18:51:27.411885  445436 retry.go:31] will retry after 472.39972ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-371992 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-371992 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3095173064/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-371992 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3095173064/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-371992 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3095173064/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.32s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.25s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-371992 version -o=json --components: (1.251526959s)
--- PASS: TestFunctional/parallel/Version/components (1.25s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-371992 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-371992
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-371992
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-371992 image ls --format short --alsologtostderr:
I0924 18:51:35.467852  481774 out.go:345] Setting OutFile to fd 1 ...
I0924 18:51:35.468544  481774 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:51:35.468557  481774 out.go:358] Setting ErrFile to fd 2...
I0924 18:51:35.468567  481774 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:51:35.468839  481774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
I0924 18:51:35.469575  481774 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 18:51:35.469735  481774 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 18:51:35.470243  481774 cli_runner.go:164] Run: docker container inspect functional-371992 --format={{.State.Status}}
I0924 18:51:35.489359  481774 ssh_runner.go:195] Run: systemctl --version
I0924 18:51:35.489481  481774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-371992
I0924 18:51:35.512490  481774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/functional-371992/id_rsa Username:docker}
I0924 18:51:35.615137  481774 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-371992 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/minikube-local-cache-test | functional-371992  | sha256:85e8f6 | 989B   |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-371992  | sha256:ce2d2c | 2.17MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-371992 image ls --format table --alsologtostderr:
I0924 18:51:36.040931  481927 out.go:345] Setting OutFile to fd 1 ...
I0924 18:51:36.041211  481927 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:51:36.041243  481927 out.go:358] Setting ErrFile to fd 2...
I0924 18:51:36.041280  481927 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:51:36.041794  481927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
I0924 18:51:36.043254  481927 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 18:51:36.045120  481927 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 18:51:36.046381  481927 cli_runner.go:164] Run: docker container inspect functional-371992 --format={{.State.Status}}
I0924 18:51:36.073024  481927 ssh_runner.go:195] Run: systemctl --version
I0924 18:51:36.073106  481927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-371992
I0924 18:51:36.104846  481927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/functional-371992/id_rsa Username:docker}
I0924 18:51:36.202020  481927 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-371992 image ls --format json --alsologtostderr:
[{"id":"sha256:85e8f6267b851fadd80e941f6f3fbed11848b07a6cbdae59c2452854432ee9d2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-371992"],"size":"989"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-371992"],"size":"2173567"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-371992 image ls --format json --alsologtostderr:
I0924 18:51:35.752360  481839 out.go:345] Setting OutFile to fd 1 ...
I0924 18:51:35.752531  481839 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:51:35.752558  481839 out.go:358] Setting ErrFile to fd 2...
I0924 18:51:35.752583  481839 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:51:35.752861  481839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
I0924 18:51:35.753757  481839 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 18:51:35.753940  481839 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 18:51:35.754492  481839 cli_runner.go:164] Run: docker container inspect functional-371992 --format={{.State.Status}}
I0924 18:51:35.778459  481839 ssh_runner.go:195] Run: systemctl --version
I0924 18:51:35.778519  481839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-371992
I0924 18:51:35.816141  481839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/functional-371992/id_rsa Username:docker}
I0924 18:51:35.919410  481839 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
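
The JSON stdout above has a fully visible shape: an array of objects with id, repoDigests, repoTags, and size (a decimal string, not a number). A short sketch decoding it, mirroring exactly those fields:

// imagels.go — decode `image ls --format json` output as shown in this log.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // decimal string, per the log
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-371992",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%s bytes\t%v\n", img.Size, img.RepoTags)
	}
}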

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-371992 image ls --format yaml --alsologtostderr:
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:85e8f6267b851fadd80e941f6f3fbed11848b07a6cbdae59c2452854432ee9d2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-371992
size: "989"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-371992
size: "2173567"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-371992 image ls --format yaml --alsologtostderr:
I0924 18:51:35.470987  481775 out.go:345] Setting OutFile to fd 1 ...
I0924 18:51:35.471176  481775 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:51:35.471203  481775 out.go:358] Setting ErrFile to fd 2...
I0924 18:51:35.471223  481775 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:51:35.471556  481775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
I0924 18:51:35.472239  481775 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 18:51:35.472421  481775 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 18:51:35.473058  481775 cli_runner.go:164] Run: docker container inspect functional-371992 --format={{.State.Status}}
I0924 18:51:35.495704  481775 ssh_runner.go:195] Run: systemctl --version
I0924 18:51:35.495770  481775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-371992
I0924 18:51:35.523017  481775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/functional-371992/id_rsa Username:docker}
I0924 18:51:35.619774  481775 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371992 ssh pgrep buildkitd: exit status 1 (333.415039ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image build -t localhost/my-image:functional-371992 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-371992 image build -t localhost/my-image:functional-371992 testdata/build --alsologtostderr: (3.325712226s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-371992 image build -t localhost/my-image:functional-371992 testdata/build --alsologtostderr:
I0924 18:51:36.092078  481933 out.go:345] Setting OutFile to fd 1 ...
I0924 18:51:36.092726  481933 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:51:36.092737  481933 out.go:358] Setting ErrFile to fd 2...
I0924 18:51:36.092743  481933 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:51:36.093044  481933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
I0924 18:51:36.093907  481933 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 18:51:36.094736  481933 config.go:182] Loaded profile config "functional-371992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 18:51:36.095284  481933 cli_runner.go:164] Run: docker container inspect functional-371992 --format={{.State.Status}}
I0924 18:51:36.126299  481933 ssh_runner.go:195] Run: systemctl --version
I0924 18:51:36.126359  481933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-371992
I0924 18:51:36.149739  481933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/functional-371992/id_rsa Username:docker}
I0924 18:51:36.246373  481933 build_images.go:161] Building image from path: /tmp/build.4157621091.tar
I0924 18:51:36.246451  481933 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0924 18:51:36.258739  481933 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4157621091.tar
I0924 18:51:36.262497  481933 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4157621091.tar: stat -c "%s %y" /var/lib/minikube/build/build.4157621091.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4157621091.tar': No such file or directory
I0924 18:51:36.262530  481933 ssh_runner.go:362] scp /tmp/build.4157621091.tar --> /var/lib/minikube/build/build.4157621091.tar (3072 bytes)
I0924 18:51:36.288608  481933 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4157621091
I0924 18:51:36.299061  481933 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4157621091 -xf /var/lib/minikube/build/build.4157621091.tar
I0924 18:51:36.308426  481933 containerd.go:394] Building image: /var/lib/minikube/build/build.4157621091
I0924 18:51:36.308501  481933 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4157621091 --local dockerfile=/var/lib/minikube/build/build.4157621091 --output type=image,name=localhost/my-image:functional-371992
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:2d5165a2cff6d1b5d4aaaae9bbbc267425ea8cea743295da2a72f2eeb64e603e 0.0s done
#8 exporting config sha256:ec25096e6b5aa5addb017b45e409d251db4356a84598d3a8e5ddca9607921903 0.0s done
#8 naming to localhost/my-image:functional-371992 done
#8 DONE 0.2s
I0924 18:51:39.302174  481933 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4157621091 --local dockerfile=/var/lib/minikube/build/build.4157621091 --output type=image,name=localhost/my-image:functional-371992: (2.993638465s)
I0924 18:51:39.302248  481933 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4157621091
I0924 18:51:39.313543  481933 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4157621091.tar
I0924 18:51:39.323601  481933 build_images.go:217] Built localhost/my-image:functional-371992 from /tmp/build.4157621091.tar
I0924 18:51:39.323628  481933 build_images.go:133] succeeded building to: functional-371992
I0924 18:51:39.323634  481933 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)
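
Steps #5 through #7 of the build log imply the testdata/build context is a busybox base plus a RUN true layer and an ADD content.txt / layer. A sketch of the round trip exercised here under that reading: build the directory, then confirm the tag shows up in image ls.

// imagebuild.go — build via `minikube image build`, then verify the tag.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	build := exec.Command("out/minikube-linux-arm64", "-p", "functional-371992",
		"image", "build", "-t", "localhost/my-image:functional-371992", "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		log.Fatalf("build failed: %v\n%s", err, out)
	}
	ls, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-371992",
		"image", "ls").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !strings.Contains(string(ls), "localhost/my-image:functional-371992") {
		log.Fatal("built image not found in image ls output")
	}
	log.Println("image built and listed")
}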

TestFunctional/parallel/ImageCommands/Setup (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-371992
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.79s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image load --daemon kicbase/echo-server:functional-371992 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-371992 image load --daemon kicbase/echo-server:functional-371992 --alsologtostderr: (1.212486809s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image load --daemon kicbase/echo-server:functional-371992 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-371992 image load --daemon kicbase/echo-server:functional-371992 --alsologtostderr: (1.060638675s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.31s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-371992
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image load --daemon kicbase/echo-server:functional-371992 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-371992 image load --daemon kicbase/echo-server:functional-371992 --alsologtostderr: (1.076424395s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.58s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image save kicbase/echo-server:functional-371992 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image rm kicbase/echo-server:functional-371992 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-371992
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-371992 image save --daemon kicbase/echo-server:functional-371992 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-371992
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-371992
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-371992
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-371992
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (131.49s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-217124 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0924 18:51:43.366955  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:52:24.328253  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:53:46.249813  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-217124 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m10.668034351s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (131.49s)

TestMultiControlPlane/serial/DeployApp (30.87s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-217124 -- rollout status deployment/busybox: (27.793345652s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-6z8h2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-fwh2f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-qblxh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-6z8h2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-fwh2f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-qblxh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-6z8h2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-fwh2f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-qblxh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (30.87s)
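
Note: the three nslookup rounds above verify, for each busybox replica, external DNS (kubernetes.io), the short in-cluster service name (kubernetes.default), and the full service FQDN. A condensed sketch of the same checks, assuming a working kubeconfig context; <pod> stands for one of the names the first command returns:

  # Enumerate the busybox replicas, then run the three resolution checks in one pod.
  kubectl get pods -o jsonpath='{.items[*].metadata.name}'
  for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
    kubectl exec <pod> -- nslookup "$name"
  done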

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.66s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-6z8h2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-6z8h2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-fwh2f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-fwh2f -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-qblxh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-217124 -- exec busybox-7dff88458-qblxh -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)
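
Note: the quoted pipeline above relies on busybox's nslookup layout, where the answer for host.minikube.internal typically lands on line 5 as "Address 1: <ip> <name>"; awk 'NR==5' isolates that line and cut -d' ' -f3 keeps the IP, which the follow-up ping targets (192.168.49.1, the docker network gateway, in this run). A standalone sketch with a placeholder pod name:

  # Resolve the host IP from inside a pod, then ping it once.
  HOST_IP=$(kubectl exec <busybox-pod> -- sh -c \
    "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl exec <busybox-pod> -- ping -c 1 "$HOST_IP"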

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (21.66s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-217124 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-217124 -v=7 --alsologtostderr: (20.645406393s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-217124 status -v=7 --alsologtostderr: (1.015787611s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.66s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-217124 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.27s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp testdata/cp-test.txt ha-217124:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1083551541/001/cp-test_ha-217124.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124:/home/docker/cp-test.txt ha-217124-m02:/home/docker/cp-test_ha-217124_ha-217124-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m02 "sudo cat /home/docker/cp-test_ha-217124_ha-217124-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124:/home/docker/cp-test.txt ha-217124-m03:/home/docker/cp-test_ha-217124_ha-217124-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m03 "sudo cat /home/docker/cp-test_ha-217124_ha-217124-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124:/home/docker/cp-test.txt ha-217124-m04:/home/docker/cp-test_ha-217124_ha-217124-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m04 "sudo cat /home/docker/cp-test_ha-217124_ha-217124-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp testdata/cp-test.txt ha-217124-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1083551541/001/cp-test_ha-217124-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124-m02:/home/docker/cp-test.txt ha-217124:/home/docker/cp-test_ha-217124-m02_ha-217124.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124 "sudo cat /home/docker/cp-test_ha-217124-m02_ha-217124.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124-m02:/home/docker/cp-test.txt ha-217124-m03:/home/docker/cp-test_ha-217124-m02_ha-217124-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m03 "sudo cat /home/docker/cp-test_ha-217124-m02_ha-217124-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124-m02:/home/docker/cp-test.txt ha-217124-m04:/home/docker/cp-test_ha-217124-m02_ha-217124-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m04 "sudo cat /home/docker/cp-test_ha-217124-m02_ha-217124-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp testdata/cp-test.txt ha-217124-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1083551541/001/cp-test_ha-217124-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124-m03:/home/docker/cp-test.txt ha-217124:/home/docker/cp-test_ha-217124-m03_ha-217124.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124 "sudo cat /home/docker/cp-test_ha-217124-m03_ha-217124.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124-m03:/home/docker/cp-test.txt ha-217124-m02:/home/docker/cp-test_ha-217124-m03_ha-217124-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m02 "sudo cat /home/docker/cp-test_ha-217124-m03_ha-217124-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124-m03:/home/docker/cp-test.txt ha-217124-m04:/home/docker/cp-test_ha-217124-m03_ha-217124-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m04 "sudo cat /home/docker/cp-test_ha-217124-m03_ha-217124-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp testdata/cp-test.txt ha-217124-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1083551541/001/cp-test_ha-217124-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124-m04:/home/docker/cp-test.txt ha-217124:/home/docker/cp-test_ha-217124-m04_ha-217124.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124 "sudo cat /home/docker/cp-test_ha-217124-m04_ha-217124.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124-m04:/home/docker/cp-test.txt ha-217124-m02:/home/docker/cp-test_ha-217124-m04_ha-217124-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m02 "sudo cat /home/docker/cp-test_ha-217124-m04_ha-217124-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 cp ha-217124-m04:/home/docker/cp-test.txt ha-217124-m03:/home/docker/cp-test_ha-217124-m04_ha-217124-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 ssh -n ha-217124-m03 "sudo cat /home/docker/cp-test_ha-217124-m04_ha-217124-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.27s)
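
Note: the long command sequence above is an all-pairs copy matrix: seed each node with testdata/cp-test.txt, copy it to every other node, and read each copy back over ssh. A compact sketch of the same pattern (node names and paths mirror the logged commands; the local /tmp round-trips are omitted):

  NODES="ha-217124 ha-217124-m02 ha-217124-m03 ha-217124-m04"
  for src in $NODES; do
    minikube -p ha-217124 cp testdata/cp-test.txt "$src:/home/docker/cp-test.txt"
    for dst in $NODES; do
      [ "$src" = "$dst" ] && continue
      minikube -p ha-217124 cp "$src:/home/docker/cp-test.txt" \
        "$dst:/home/docker/cp-test_${src}_${dst}.txt"
      minikube -p ha-217124 ssh -n "$dst" "sudo cat /home/docker/cp-test_${src}_${dst}.txt"
    done
  done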

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.83s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-217124 node stop m02 -v=7 --alsologtostderr: (12.084200963s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-217124 status -v=7 --alsologtostderr: exit status 7 (748.016659ms)

-- stdout --
	ha-217124
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-217124-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-217124-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-217124-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0924 18:55:20.580126  498163 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:55:20.580312  498163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:55:20.580325  498163 out.go:358] Setting ErrFile to fd 2...
	I0924 18:55:20.580331  498163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:55:20.580667  498163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
	I0924 18:55:20.580896  498163 out.go:352] Setting JSON to false
	I0924 18:55:20.580925  498163 mustload.go:65] Loading cluster: ha-217124
	I0924 18:55:20.580972  498163 notify.go:220] Checking for updates...
	I0924 18:55:20.581513  498163 config.go:182] Loaded profile config "ha-217124": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 18:55:20.581535  498163 status.go:174] checking status of ha-217124 ...
	I0924 18:55:20.582331  498163 cli_runner.go:164] Run: docker container inspect ha-217124 --format={{.State.Status}}
	I0924 18:55:20.602347  498163 status.go:364] ha-217124 host status = "Running" (err=<nil>)
	I0924 18:55:20.602373  498163 host.go:66] Checking if "ha-217124" exists ...
	I0924 18:55:20.602683  498163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-217124
	I0924 18:55:20.626757  498163 host.go:66] Checking if "ha-217124" exists ...
	I0924 18:55:20.627072  498163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 18:55:20.627130  498163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-217124
	I0924 18:55:20.643750  498163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/ha-217124/id_rsa Username:docker}
	I0924 18:55:20.739465  498163 ssh_runner.go:195] Run: systemctl --version
	I0924 18:55:20.744649  498163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:55:20.757164  498163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:55:20.827265  498163 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-24 18:55:20.816927846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:55:20.827953  498163 kubeconfig.go:125] found "ha-217124" server: "https://192.168.49.254:8443"
	I0924 18:55:20.827990  498163 api_server.go:166] Checking apiserver status ...
	I0924 18:55:20.828040  498163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:55:20.840840  498163 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup
	I0924 18:55:20.850891  498163 api_server.go:182] apiserver freezer: "8:freezer:/docker/1c21ea2c2a3cac10d02a6adcd1ba2dfae7f6b35e2ce6253e08aec1155f561ca1/kubepods/burstable/podc971995a567ab0d31141f8b20b9d8b1b/b077b0e2719f55be52d032a9dac1026dabf33a0f204fcca8d5253e48900d35f5"
	I0924 18:55:20.850982  498163 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1c21ea2c2a3cac10d02a6adcd1ba2dfae7f6b35e2ce6253e08aec1155f561ca1/kubepods/burstable/podc971995a567ab0d31141f8b20b9d8b1b/b077b0e2719f55be52d032a9dac1026dabf33a0f204fcca8d5253e48900d35f5/freezer.state
	I0924 18:55:20.859906  498163 api_server.go:204] freezer state: "THAWED"
	I0924 18:55:20.859935  498163 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0924 18:55:20.867819  498163 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0924 18:55:20.867845  498163 status.go:456] ha-217124 apiserver status = Running (err=<nil>)
	I0924 18:55:20.867857  498163 status.go:176] ha-217124 status: &{Name:ha-217124 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 18:55:20.867873  498163 status.go:174] checking status of ha-217124-m02 ...
	I0924 18:55:20.868186  498163 cli_runner.go:164] Run: docker container inspect ha-217124-m02 --format={{.State.Status}}
	I0924 18:55:20.885876  498163 status.go:364] ha-217124-m02 host status = "Stopped" (err=<nil>)
	I0924 18:55:20.885899  498163 status.go:377] host is not running, skipping remaining checks
	I0924 18:55:20.885906  498163 status.go:176] ha-217124-m02 status: &{Name:ha-217124-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 18:55:20.885926  498163 status.go:174] checking status of ha-217124-m03 ...
	I0924 18:55:20.886259  498163 cli_runner.go:164] Run: docker container inspect ha-217124-m03 --format={{.State.Status}}
	I0924 18:55:20.908116  498163 status.go:364] ha-217124-m03 host status = "Running" (err=<nil>)
	I0924 18:55:20.908143  498163 host.go:66] Checking if "ha-217124-m03" exists ...
	I0924 18:55:20.908457  498163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-217124-m03
	I0924 18:55:20.927270  498163 host.go:66] Checking if "ha-217124-m03" exists ...
	I0924 18:55:20.927593  498163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 18:55:20.927654  498163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-217124-m03
	I0924 18:55:20.946985  498163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/ha-217124-m03/id_rsa Username:docker}
	I0924 18:55:21.045549  498163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:55:21.062692  498163 kubeconfig.go:125] found "ha-217124" server: "https://192.168.49.254:8443"
	I0924 18:55:21.062727  498163 api_server.go:166] Checking apiserver status ...
	I0924 18:55:21.062771  498163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:55:21.076474  498163 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1330/cgroup
	I0924 18:55:21.088959  498163 api_server.go:182] apiserver freezer: "8:freezer:/docker/f05f5c049e1da8576f1c434797b345b84a708a31c85d1a6f6a6cc019935f7f85/kubepods/burstable/pode525c01ed76b85f6f826943153b3b161/309f54fc6a76e09873c07888572f10421ddb302e688b068626e6d474981605c7"
	I0924 18:55:21.089111  498163 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f05f5c049e1da8576f1c434797b345b84a708a31c85d1a6f6a6cc019935f7f85/kubepods/burstable/pode525c01ed76b85f6f826943153b3b161/309f54fc6a76e09873c07888572f10421ddb302e688b068626e6d474981605c7/freezer.state
	I0924 18:55:21.099710  498163 api_server.go:204] freezer state: "THAWED"
	I0924 18:55:21.099752  498163 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0924 18:55:21.108393  498163 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0924 18:55:21.108431  498163 status.go:456] ha-217124-m03 apiserver status = Running (err=<nil>)
	I0924 18:55:21.108469  498163 status.go:176] ha-217124-m03 status: &{Name:ha-217124-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 18:55:21.108495  498163 status.go:174] checking status of ha-217124-m04 ...
	I0924 18:55:21.108854  498163 cli_runner.go:164] Run: docker container inspect ha-217124-m04 --format={{.State.Status}}
	I0924 18:55:21.133776  498163 status.go:364] ha-217124-m04 host status = "Running" (err=<nil>)
	I0924 18:55:21.133805  498163 host.go:66] Checking if "ha-217124-m04" exists ...
	I0924 18:55:21.134137  498163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-217124-m04
	I0924 18:55:21.153849  498163 host.go:66] Checking if "ha-217124-m04" exists ...
	I0924 18:55:21.154183  498163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 18:55:21.154232  498163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-217124-m04
	I0924 18:55:21.171937  498163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/ha-217124-m04/id_rsa Username:docker}
	I0924 18:55:21.266607  498163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:55:21.279396  498163 status.go:176] ha-217124-m04 status: &{Name:ha-217124-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.83s)
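
Note: the stderr trace shows how status decides a control plane's apiserver is healthy: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then probe /healthz on the HA virtual IP. A sketch of the same probe run by hand inside a node; <pid> and <cgroup-path> are placeholders filled from the first two commands, and curl stands in for the harness's Go HTTP client:

  sudo pgrep -xnf 'kube-apiserver.*minikube.*'                  # apiserver PID
  sudo egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup              # its freezer cgroup
  sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state   # expect "THAWED"
  curl -k https://192.168.49.254:8443/healthz                   # expect 200 "ok"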

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (28s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 node start m02 -v=7 --alsologtostderr
E0924 18:55:48.459512  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:55:48.465856  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:55:48.477440  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:55:48.498710  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:55:48.540122  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:55:48.621554  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-217124 node start m02 -v=7 --alsologtostderr: (26.624388147s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 status -v=7 --alsologtostderr
E0924 18:55:48.783556  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:55:49.105540  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:55:49.749541  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-217124 status -v=7 --alsologtostderr: (1.243694973s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.00s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0924 18:55:51.030932  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.052924561s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.07s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-217124 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-217124 -v=7 --alsologtostderr
E0924 18:55:53.592379  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:55:58.714526  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:56:02.387778  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:56:08.956780  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-217124 -v=7 --alsologtostderr: (37.07447639s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-217124 --wait=true -v=7 --alsologtostderr
E0924 18:56:29.438631  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:56:30.091658  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:57:10.400332  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-217124 --wait=true -v=7 --alsologtostderr: (1m32.833009094s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-217124
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.07s)
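
Note: the assertion here is that a full stop/start cycle preserves cluster membership. Restated as plain commands (taken from the log, verbosity flags dropped), the node list printed before the stop should match the one printed after the restart:

  minikube node list -p ha-217124     # record the four nodes
  minikube stop -p ha-217124
  minikube start -p ha-217124 --wait=true
  minikube node list -p ha-217124     # the same four nodes should reappear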

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.76s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-217124 node delete m03 -v=7 --alsologtostderr: (9.809131409s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.76s)
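
Note: the go-template above walks each node's conditions and prints the status of the Ready condition, one per line; after deleting m03 it should emit exactly three True lines (ha-217124, m02, m04). The same check without the harness's extra quoting, template otherwise verbatim:

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'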

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.18s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 stop -v=7 --alsologtostderr
E0924 18:58:32.324165  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-217124 stop -v=7 --alsologtostderr: (36.053323195s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-217124 status -v=7 --alsologtostderr: exit status 7 (123.895736ms)

-- stdout --
	ha-217124
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-217124-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-217124-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0924 18:58:48.775833  512465 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:58:48.776024  512465 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:58:48.776038  512465 out.go:358] Setting ErrFile to fd 2...
	I0924 18:58:48.776044  512465 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:58:48.776379  512465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
	I0924 18:58:48.776625  512465 out.go:352] Setting JSON to false
	I0924 18:58:48.776666  512465 mustload.go:65] Loading cluster: ha-217124
	I0924 18:58:48.776771  512465 notify.go:220] Checking for updates...
	I0924 18:58:48.777211  512465 config.go:182] Loaded profile config "ha-217124": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 18:58:48.777238  512465 status.go:174] checking status of ha-217124 ...
	I0924 18:58:48.778272  512465 cli_runner.go:164] Run: docker container inspect ha-217124 --format={{.State.Status}}
	I0924 18:58:48.797954  512465 status.go:364] ha-217124 host status = "Stopped" (err=<nil>)
	I0924 18:58:48.797977  512465 status.go:377] host is not running, skipping remaining checks
	I0924 18:58:48.797984  512465 status.go:176] ha-217124 status: &{Name:ha-217124 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 18:58:48.798030  512465 status.go:174] checking status of ha-217124-m02 ...
	I0924 18:58:48.798371  512465 cli_runner.go:164] Run: docker container inspect ha-217124-m02 --format={{.State.Status}}
	I0924 18:58:48.824691  512465 status.go:364] ha-217124-m02 host status = "Stopped" (err=<nil>)
	I0924 18:58:48.824716  512465 status.go:377] host is not running, skipping remaining checks
	I0924 18:58:48.824723  512465 status.go:176] ha-217124-m02 status: &{Name:ha-217124-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 18:58:48.824742  512465 status.go:174] checking status of ha-217124-m04 ...
	I0924 18:58:48.825096  512465 cli_runner.go:164] Run: docker container inspect ha-217124-m04 --format={{.State.Status}}
	I0924 18:58:48.846419  512465 status.go:364] ha-217124-m04 host status = "Stopped" (err=<nil>)
	I0924 18:58:48.846441  512465 status.go:377] host is not running, skipping remaining checks
	I0924 18:58:48.846449  512465 status.go:176] ha-217124-m04 status: &{Name:ha-217124-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.18s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (68.89s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-217124 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-217124 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m7.904548473s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (68.89s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (42.73s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-217124 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-217124 --control-plane -v=7 --alsologtostderr: (41.711427774s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-217124 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-217124 status -v=7 --alsologtostderr: (1.019367592s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.73s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.108755703s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

                                                
                                    
TestJSONOutput/start/Command (48.53s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-810944 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0924 19:01:02.387795  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:01:16.166231  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-810944 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (48.530718829s)
--- PASS: TestJSONOutput/start/Command (48.53s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
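
Note: the DistinctCurrentSteps and IncreasingCurrentSteps checks parse the event stream emitted by the start/Command run above: each io.k8s.sigs.minikube.step event carries data.currentstep, and the values must be unique and rise monotonically. A sketch for eyeballing the same stream, assuming jq is available:

  # Print the step counter from each CloudEvents line; the sequence should rise without repeats.
  minikube start -p json-output-810944 --output=json --user=testUser --memory=2200 \
    --wait=true --driver=docker --container-runtime=containerd \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep'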

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-810944 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-810944 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-810944 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-810944 --output=json --user=testUser: (5.849624991s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-494464 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-494464 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.39588ms)

-- stdout --
	{"specversion":"1.0","id":"22576f1f-7b33-4ca6-b87c-ae45a8964cad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-494464] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"337eecd4-edcf-49c8-bf67-c8421a064a2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19700"}}
	{"specversion":"1.0","id":"a51bd837-a346-4626-a4be-a892b36ca2e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"aa816ec8-ae82-41c7-aa1e-cc2e11749e21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19700-440051/kubeconfig"}}
	{"specversion":"1.0","id":"323ad606-ddf4-4468-8748-440c06f3f79f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-440051/.minikube"}}
	{"specversion":"1.0","id":"a8288cc2-2b93-4094-9f4f-73ffccd0e02a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"db6e1179-d3b5-4e35-8171-306fe885111c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1eb50508-6094-4c54-8e0b-a5f572211c66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-494464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-494464
--- PASS: TestErrorJSONOutput (0.21s)
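
Note: with --output=json every line minikube prints is a CloudEvents envelope, and the expected failure surfaces as an io.k8s.sigs.minikube.error event whose data block carries exitcode, name (DRV_UNSUPPORTED_OS here) and message. A sketch for pulling those fields out of the stream, assuming jq is available:

  minikube start -p json-output-error-494464 --memory=2200 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'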

                                                
                                    
TestKicCustomNetwork/create_custom_network (39.69s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-356441 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-356441 --network=: (37.480011609s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-356441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-356441
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-356441: (2.182492085s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.69s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.79s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-016851 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-016851 --network=bridge: (31.819445147s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-016851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-016851
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-016851: (1.947989684s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.79s)

                                                
                                    
TestKicExistingNetwork (34.65s)

=== RUN   TestKicExistingNetwork
I0924 19:03:03.897984  445436 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0924 19:03:03.913460  445436 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0924 19:03:03.913558  445436 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0924 19:03:03.913576  445436 cli_runner.go:164] Run: docker network inspect existing-network
W0924 19:03:03.929126  445436 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0924 19:03:03.929156  445436 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0924 19:03:03.929175  445436 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0924 19:03:03.929283  445436 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0924 19:03:03.945864  445436 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-94de42590218 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:6d:61:23:e3} reservation:<nil>}
I0924 19:03:03.946220  445436 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400047d220}
I0924 19:03:03.946244  445436 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0924 19:03:03.946298  445436 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0924 19:03:04.034685  445436 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-709841 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-709841 --network=existing-network: (32.465215061s)
helpers_test.go:175: Cleaning up "existing-network-709841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-709841
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-709841: (2.012506688s)
I0924 19:03:38.528619  445436 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.65s)
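
Note: the pre-created "existing network" that the cli_runner lines above build can be reproduced manually; a sketch using the subnet from this run (any free private /24 works, names are illustrative):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    # reuse the network instead of letting minikube create one
    minikube start -p existing-demo --network=existing-network
    minikube delete -p existing-demo
    docker network rm existing-network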

TestKicCustomSubnet (33.09s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-234637 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-234637 --subnet=192.168.60.0/24: (31.017253114s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-234637 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-234637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-234637
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-234637: (2.050161109s)
--- PASS: TestKicCustomSubnet (33.09s)
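
Note: the assertion works because minikube names the Docker network after the profile; a hand-run sketch (profile name illustrative):

    minikube start -p subnet-demo --subnet=192.168.60.0/24
    # expected to print exactly 192.168.60.0/24
    docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
    minikube delete -p subnet-demo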

TestKicStaticIP (35.14s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-889824 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-889824 --static-ip=192.168.200.200: (32.884427364s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-889824 ip
helpers_test.go:175: Cleaning up "static-ip-889824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-889824
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-889824: (2.097754881s)
--- PASS: TestKicStaticIP (35.14s)
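
Note: a static-IP replay; the address must be a private IPv4 one, and the profile name here is illustrative:

    minikube start -p static-demo --static-ip=192.168.200.200
    # expected to print 192.168.200.200
    minikube -p static-demo ip
    minikube delete -p static-demo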

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (71.98s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-825133 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-825133 --driver=docker  --container-runtime=containerd: (30.297144526s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-827783 --driver=docker  --container-runtime=containerd
E0924 19:05:48.459448  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-827783 --driver=docker  --container-runtime=containerd: (36.419593632s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-825133
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-827783
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-827783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-827783
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-827783: (2.031251216s)
helpers_test.go:175: Cleaning up "first-825133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-825133
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-825133: (1.913431121s)
--- PASS: TestMinikubeProfile (71.98s)
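
Note: the profile subcommands used here switch and inspect clusters without modifying them; a condensed sketch (names illustrative):

    minikube start -p first --driver=docker --container-runtime=containerd
    minikube start -p second --driver=docker --container-runtime=containerd
    minikube profile first        # make "first" the active profile
    minikube profile list -ojson  # machine-readable view of both profiles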

TestMountStart/serial/StartWithMountFirst (7.19s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-991051 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0924 19:06:02.387700  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-991051 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.189440208s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.19s)
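
Note: the flags above start a node that only hosts a host-directory mount (--no-kubernetes skips cluster bring-up); the share lands at /minikube-host, which the Verify steps below list. A trimmed sketch (port and profile name illustrative):

    minikube start -p mount-demo --memory=2048 --mount --mount-port 46464 --no-kubernetes
    minikube -p mount-demo ssh -- ls /minikube-host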

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-991051 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (5.93s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-992987 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-992987 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.929317555s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.93s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-992987 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-991051 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-991051 --alsologtostderr -v=5: (1.612581102s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-992987 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-992987
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-992987: (1.208362798s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.88s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-992987
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-992987: (6.88424964s)
--- PASS: TestMountStart/serial/RestartStopped (7.88s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-992987 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (65.13s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-882911 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0924 19:07:25.453577  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-882911 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.559178871s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.13s)
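
Note: an equivalent two-node bring-up with the same sizing and wait settings (profile name illustrative):

    minikube start -p multi-demo --nodes=2 --memory=2200 --wait=true --driver=docker --container-runtime=containerd
    # should report multi-demo (control plane) and multi-demo-m02 (worker)
    minikube -p multi-demo status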

TestMultiNode/serial/DeployApp2Nodes (17.28s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-882911 -- rollout status deployment/busybox: (15.456641828s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- exec busybox-7dff88458-hmkm9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- exec busybox-7dff88458-lsfrf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- exec busybox-7dff88458-hmkm9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- exec busybox-7dff88458-lsfrf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- exec busybox-7dff88458-hmkm9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- exec busybox-7dff88458-lsfrf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.28s)
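
Note: the check boils down to applying the busybox Deployment and resolving cluster DNS from every replica; a sketch, assuming the testdata manifest from the minikube repo is at hand:

    kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    # pods are spread across nodes; each one must resolve the in-cluster name
    for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done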

TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- exec busybox-7dff88458-hmkm9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- exec busybox-7dff88458-hmkm9 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- exec busybox-7dff88458-lsfrf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-882911 -- exec busybox-7dff88458-lsfrf -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
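
Note: the pipeline in the exec commands extracts the host gateway address from busybox nslookup output, whose fifth line reads "Address 1: <ip> host.minikube.internal"; awk 'NR==5' keeps that line and cut -d' ' -f3 keeps the IP. A sketch with a placeholder pod name:

    HOST_IP=$(kubectl exec busybox-pod -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl exec busybox-pod -- sh -c "ping -c 1 $HOST_IP"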

TestMultiNode/serial/AddNode (18.72s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-882911 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-882911 -v 3 --alsologtostderr: (18.068633221s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.72s)
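
Note: adding a worker is a single command; node numbering continues from the existing set, which is why the new node is m03 here:

    minikube node add -p multi-demo   # joins as multi-demo-m03
    minikube -p multi-demo status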

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-882911 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (10.01s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 cp testdata/cp-test.txt multinode-882911:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 cp multinode-882911:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3483247079/001/cp-test_multinode-882911.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 cp multinode-882911:/home/docker/cp-test.txt multinode-882911-m02:/home/docker/cp-test_multinode-882911_multinode-882911-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911-m02 "sudo cat /home/docker/cp-test_multinode-882911_multinode-882911-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 cp multinode-882911:/home/docker/cp-test.txt multinode-882911-m03:/home/docker/cp-test_multinode-882911_multinode-882911-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911-m03 "sudo cat /home/docker/cp-test_multinode-882911_multinode-882911-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 cp testdata/cp-test.txt multinode-882911-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 cp multinode-882911-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3483247079/001/cp-test_multinode-882911-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 cp multinode-882911-m02:/home/docker/cp-test.txt multinode-882911:/home/docker/cp-test_multinode-882911-m02_multinode-882911.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911 "sudo cat /home/docker/cp-test_multinode-882911-m02_multinode-882911.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 cp multinode-882911-m02:/home/docker/cp-test.txt multinode-882911-m03:/home/docker/cp-test_multinode-882911-m02_multinode-882911-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911-m03 "sudo cat /home/docker/cp-test_multinode-882911-m02_multinode-882911-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 cp testdata/cp-test.txt multinode-882911-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 cp multinode-882911-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3483247079/001/cp-test_multinode-882911-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 cp multinode-882911-m03:/home/docker/cp-test.txt multinode-882911:/home/docker/cp-test_multinode-882911-m03_multinode-882911.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911 "sudo cat /home/docker/cp-test_multinode-882911-m03_multinode-882911.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 cp multinode-882911-m03:/home/docker/cp-test.txt multinode-882911-m02:/home/docker/cp-test_multinode-882911-m03_multinode-882911-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 ssh -n multinode-882911-m02 "sudo cat /home/docker/cp-test_multinode-882911-m03_multinode-882911-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.01s)
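
Note: the matrix above exercises all three directions of minikube cp, each verified by reading the file back over ssh -n on the receiving node; condensed (names illustrative):

    # host -> node
    minikube -p multi-demo cp testdata/cp-test.txt multi-demo:/home/docker/cp-test.txt
    # node -> host
    minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node
    minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt multi-demo-m02:/home/docker/cp-test.txt
    minikube -p multi-demo ssh -n multi-demo-m02 "sudo cat /home/docker/cp-test.txt"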

TestMultiNode/serial/StopNode (2.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-882911 node stop m03: (1.234096811s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-882911 status: exit status 7 (547.068943ms)

-- stdout --
	multinode-882911
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-882911-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-882911-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-882911 status --alsologtostderr: exit status 7 (512.74753ms)

-- stdout --
	multinode-882911
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-882911-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-882911-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0924 19:08:20.186764  565877 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:08:20.186926  565877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:08:20.186938  565877 out.go:358] Setting ErrFile to fd 2...
	I0924 19:08:20.186961  565877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:08:20.187239  565877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
	I0924 19:08:20.187466  565877 out.go:352] Setting JSON to false
	I0924 19:08:20.187526  565877 mustload.go:65] Loading cluster: multinode-882911
	I0924 19:08:20.187600  565877 notify.go:220] Checking for updates...
	I0924 19:08:20.188013  565877 config.go:182] Loaded profile config "multinode-882911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 19:08:20.188033  565877 status.go:174] checking status of multinode-882911 ...
	I0924 19:08:20.188657  565877 cli_runner.go:164] Run: docker container inspect multinode-882911 --format={{.State.Status}}
	I0924 19:08:20.208234  565877 status.go:364] multinode-882911 host status = "Running" (err=<nil>)
	I0924 19:08:20.208258  565877 host.go:66] Checking if "multinode-882911" exists ...
	I0924 19:08:20.208608  565877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-882911
	I0924 19:08:20.239544  565877 host.go:66] Checking if "multinode-882911" exists ...
	I0924 19:08:20.239867  565877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 19:08:20.239909  565877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-882911
	I0924 19:08:20.261507  565877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33304 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/multinode-882911/id_rsa Username:docker}
	I0924 19:08:20.358781  565877 ssh_runner.go:195] Run: systemctl --version
	I0924 19:08:20.363025  565877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:08:20.375057  565877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 19:08:20.428812  565877 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-24 19:08:20.418079587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 19:08:20.429407  565877 kubeconfig.go:125] found "multinode-882911" server: "https://192.168.67.2:8443"
	I0924 19:08:20.429458  565877 api_server.go:166] Checking apiserver status ...
	I0924 19:08:20.429511  565877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:08:20.441585  565877 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1380/cgroup
	I0924 19:08:20.451613  565877 api_server.go:182] apiserver freezer: "8:freezer:/docker/431e788706cad41af2eacb9fdd419c950a27a9ec7f12135c056016eb240d6792/kubepods/burstable/pod47a296902f126005ae56d2b9e77f9799/e9e25772dadd3bb71423001e6e416abb0030f47c716a64ad0d64473cbf89304a"
	I0924 19:08:20.451689  565877 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/431e788706cad41af2eacb9fdd419c950a27a9ec7f12135c056016eb240d6792/kubepods/burstable/pod47a296902f126005ae56d2b9e77f9799/e9e25772dadd3bb71423001e6e416abb0030f47c716a64ad0d64473cbf89304a/freezer.state
	I0924 19:08:20.460412  565877 api_server.go:204] freezer state: "THAWED"
	I0924 19:08:20.460441  565877 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0924 19:08:20.468252  565877 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0924 19:08:20.468280  565877 status.go:456] multinode-882911 apiserver status = Running (err=<nil>)
	I0924 19:08:20.468292  565877 status.go:176] multinode-882911 status: &{Name:multinode-882911 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 19:08:20.468310  565877 status.go:174] checking status of multinode-882911-m02 ...
	I0924 19:08:20.468651  565877 cli_runner.go:164] Run: docker container inspect multinode-882911-m02 --format={{.State.Status}}
	I0924 19:08:20.485233  565877 status.go:364] multinode-882911-m02 host status = "Running" (err=<nil>)
	I0924 19:08:20.485261  565877 host.go:66] Checking if "multinode-882911-m02" exists ...
	I0924 19:08:20.485785  565877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-882911-m02
	I0924 19:08:20.502207  565877 host.go:66] Checking if "multinode-882911-m02" exists ...
	I0924 19:08:20.502566  565877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 19:08:20.502616  565877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-882911-m02
	I0924 19:08:20.519746  565877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33309 SSHKeyPath:/home/jenkins/minikube-integration/19700-440051/.minikube/machines/multinode-882911-m02/id_rsa Username:docker}
	I0924 19:08:20.610835  565877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:08:20.622230  565877 status.go:176] multinode-882911-m02 status: &{Name:multinode-882911-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0924 19:08:20.622265  565877 status.go:174] checking status of multinode-882911-m03 ...
	I0924 19:08:20.622591  565877 cli_runner.go:164] Run: docker container inspect multinode-882911-m03 --format={{.State.Status}}
	I0924 19:08:20.639644  565877 status.go:364] multinode-882911-m03 host status = "Stopped" (err=<nil>)
	I0924 19:08:20.639670  565877 status.go:377] host is not running, skipping remaining checks
	I0924 19:08:20.639678  565877 status.go:176] multinode-882911-m03 status: &{Name:multinode-882911-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
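
Note: status deliberately exits 7 once any node is down, which is what the Non-zero exit lines assert; that makes it usable as a scripted health probe:

    minikube -p multi-demo node stop m03
    minikube -p multi-demo status || echo "status exit code: $?"   # prints 7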

TestMultiNode/serial/StartAfterStop (9.79s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-882911 node start m03 -v=7 --alsologtostderr: (8.94192296s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.79s)

TestMultiNode/serial/RestartKeepsNodes (88.84s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-882911
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-882911
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-882911: (25.192499122s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-882911 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-882911 --wait=true -v=8 --alsologtostderr: (1m3.525231356s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-882911
--- PASS: TestMultiNode/serial/RestartKeepsNodes (88.84s)
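
Note: the assertion is simply that the node list is identical before and after a full stop/start cycle:

    minikube node list -p multi-demo > before.txt
    minikube stop -p multi-demo
    minikube start -p multi-demo --wait=true
    minikube node list -p multi-demo | diff before.txt -   # no output expected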

TestMultiNode/serial/DeleteNode (6s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-882911 node delete m03: (5.327496515s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.00s)
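
Note: after the delete, the remaining nodes are checked for readiness via a go-template; a shell-friendly form of the same query:

    minikube -p multi-demo node delete m03
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'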

TestMultiNode/serial/StopMultiNode (23.98s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-882911 stop: (23.79428181s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-882911 status: exit status 7 (81.658663ms)

-- stdout --
	multinode-882911
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-882911-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-882911 status --alsologtostderr: exit status 7 (105.368687ms)

-- stdout --
	multinode-882911
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-882911-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0924 19:10:29.197900  574211 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:10:29.198112  574211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:10:29.198137  574211 out.go:358] Setting ErrFile to fd 2...
	I0924 19:10:29.198156  574211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:10:29.198463  574211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
	I0924 19:10:29.198689  574211 out.go:352] Setting JSON to false
	I0924 19:10:29.198742  574211 mustload.go:65] Loading cluster: multinode-882911
	I0924 19:10:29.198773  574211 notify.go:220] Checking for updates...
	I0924 19:10:29.199210  574211 config.go:182] Loaded profile config "multinode-882911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 19:10:29.199255  574211 status.go:174] checking status of multinode-882911 ...
	I0924 19:10:29.200085  574211 cli_runner.go:164] Run: docker container inspect multinode-882911 --format={{.State.Status}}
	I0924 19:10:29.218075  574211 status.go:364] multinode-882911 host status = "Stopped" (err=<nil>)
	I0924 19:10:29.218096  574211 status.go:377] host is not running, skipping remaining checks
	I0924 19:10:29.218102  574211 status.go:176] multinode-882911 status: &{Name:multinode-882911 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 19:10:29.218133  574211 status.go:174] checking status of multinode-882911-m02 ...
	I0924 19:10:29.218451  574211 cli_runner.go:164] Run: docker container inspect multinode-882911-m02 --format={{.State.Status}}
	I0924 19:10:29.249773  574211 status.go:364] multinode-882911-m02 host status = "Stopped" (err=<nil>)
	I0924 19:10:29.249795  574211 status.go:377] host is not running, skipping remaining checks
	I0924 19:10:29.249803  574211 status.go:176] multinode-882911-m02 status: &{Name:multinode-882911-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.98s)
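
Note: unlike node stop, a plain stop halts every node in the profile, after which status reports all components Stopped and exits 7:

    minikube -p multi-demo stop
    minikube -p multi-demo status   # exit status 7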

TestMultiNode/serial/RestartMultiNode (56.06s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-882911 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0924 19:10:48.459121  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:11:02.387303  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-882911 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (55.316964134s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-882911 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.06s)

TestMultiNode/serial/ValidateNameConflict (33.08s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-882911
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-882911-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-882911-m02 --driver=docker  --container-runtime=containerd: exit status 14 (81.921236ms)

-- stdout --
	* [multinode-882911-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-440051/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-440051/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-882911-m02' is duplicated with machine name 'multinode-882911-m02' in profile 'multinode-882911'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-882911-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-882911-m03 --driver=docker  --container-runtime=containerd: (30.662522521s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-882911
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-882911: exit status 80 (358.972199ms)

-- stdout --
	* Adding node m03 to cluster multinode-882911 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-882911-m03 already exists in multinode-882911-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-882911-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-882911-m03: (1.914454493s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.08s)
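
Note: a profile name that collides with a machine name inside an existing multinode profile is rejected up front (exit 14, MK_USAGE) before anything is created:

    # fails: multi-demo already owns a machine called multi-demo-m02
    minikube start -p multi-demo-m02 --driver=docker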

TestPreload (124.23s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-573330 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0924 19:12:11.527658  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-573330 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m26.295779538s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-573330 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-573330 image pull gcr.io/k8s-minikube/busybox: (1.958650127s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-573330
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-573330: (12.1328449s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-573330 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-573330 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.061599192s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-573330 image list
helpers_test.go:175: Cleaning up "test-preload-573330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-573330
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-573330: (2.419859738s)
--- PASS: TestPreload (124.23s)
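
Note: the preload scenario in one place: start without the preload tarball, pull an extra image, and confirm it survives a stop/start cycle (profile name illustrative):

    minikube start -p preload-demo --memory=2200 --preload=false --kubernetes-version=v1.24.4
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo
    minikube -p preload-demo image list   # busybox should still be listed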

TestScheduledStopUnix (106.99s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-778473 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-778473 --memory=2048 --driver=docker  --container-runtime=containerd: (30.329939748s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-778473 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-778473 -n scheduled-stop-778473
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-778473 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0924 19:14:37.906847  445436 retry.go:31] will retry after 101.004µs: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:37.907932  445436 retry.go:31] will retry after 139.93µs: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:37.909059  445436 retry.go:31] will retry after 157.753µs: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:37.910287  445436 retry.go:31] will retry after 436.3µs: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:37.912198  445436 retry.go:31] will retry after 509.708µs: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:37.913306  445436 retry.go:31] will retry after 1.046444ms: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:37.914432  445436 retry.go:31] will retry after 1.442923ms: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:37.916623  445436 retry.go:31] will retry after 1.673946ms: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:37.918798  445436 retry.go:31] will retry after 2.842241ms: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:37.921938  445436 retry.go:31] will retry after 4.484404ms: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:37.927152  445436 retry.go:31] will retry after 5.050975ms: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:37.932456  445436 retry.go:31] will retry after 11.939063ms: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:37.944638  445436 retry.go:31] will retry after 13.765379ms: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:37.959047  445436 retry.go:31] will retry after 18.89802ms: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:37.978266  445436 retry.go:31] will retry after 24.678119ms: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
I0924 19:14:38.005136  445436 retry.go:31] will retry after 50.326734ms: open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/scheduled-stop-778473/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-778473 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-778473 -n scheduled-stop-778473
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-778473
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-778473 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0924 19:15:48.458854  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-778473
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-778473: exit status 7 (67.725495ms)

-- stdout --
	scheduled-stop-778473
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-778473 -n scheduled-stop-778473
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-778473 -n scheduled-stop-778473: exit status 7 (72.255891ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-778473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-778473
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-778473: (5.081376264s)
--- PASS: TestScheduledStopUnix (106.99s)
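
Note: scheduled stops are armed, cancelled, and re-armed with the flags seen above (profile name illustrative):

    minikube stop -p sched-demo --schedule 5m        # arm a stop five minutes out
    minikube stop -p sched-demo --cancel-scheduled   # disarm it
    minikube stop -p sched-demo --schedule 15s       # re-arm; fires about 15s later
    minikube status -p sched-demo                    # exit status 7 once the stop lands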

TestInsufficientStorage (9.88s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-061289 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-061289 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.421602223s)

-- stdout --
	{"specversion":"1.0","id":"bf0608f0-d716-4c83-9784-0bc24be7a7b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-061289] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac0c52bb-0c1d-4fec-8fa9-1176ef963e2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19700"}}
	{"specversion":"1.0","id":"63f45121-4830-44ce-8ac5-dc4c2d66dc2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8d0ad8a3-460a-4b74-af7a-389c549f474d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19700-440051/kubeconfig"}}
	{"specversion":"1.0","id":"ea366e52-eb5c-46d6-98f8-fe78f4a3d065","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-440051/.minikube"}}
	{"specversion":"1.0","id":"cc4a52ec-b57f-4ec4-b36b-9a6cf9c9900a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4d27d3b2-ee58-47db-9bf0-607798afbc9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f076b8b8-59a2-4929-891c-ad6b02e17310","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c89ed25d-f581-4566-bb62-481bcb85bd4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a0a09163-72d1-42bc-b5ed-0b6df64cc379","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"62379e71-f8ad-4a1a-b24c-087b8b33c721","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"db2c0cf8-6b49-4cd9-852b-06e16c95f0a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-061289\" primary control-plane node in \"insufficient-storage-061289\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8dfe175-5acf-4b24-b210-98fe8c04750b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727108449-19696 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d6ede85-02ef-41a4-9ac2-91281ac9fa37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"196fefaf-ddea-4e43-9515-a839bde2b0b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-061289 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-061289 --output=json --layout=cluster: exit status 7 (290.65745ms)

-- stdout --
	{"Name":"insufficient-storage-061289","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-061289","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0924 19:16:01.785944  592909 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-061289" does not appear in /home/jenkins/minikube-integration/19700-440051/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-061289 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-061289 --output=json --layout=cluster: exit status 7 (298.811228ms)

-- stdout --
	{"Name":"insufficient-storage-061289","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-061289","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0924 19:16:02.085207  592970 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-061289" does not appear in /home/jenkins/minikube-integration/19700-440051/kubeconfig
	E0924 19:16:02.095337  592970 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/insufficient-storage-061289/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-061289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-061289
E0924 19:16:02.388657  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-061289: (1.8649007s)
--- PASS: TestInsufficientStorage (9.88s)
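
Note that the low-disk condition this test exercises is simulated, not real: the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the JSON output above fake a nearly full /var. A minimal sketch of reproducing the failure by hand (the profile name is hypothetical, and the exact semantics of the two test variables are an assumption based on their names and the values logged):

$ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
    out/minikube-linux-arm64 start -p storage-demo --memory=2048 \
    --output=json --driver=docker --container-runtime=containerd
# exits 26 (RSRC_DOCKER_STORAGE); per the error text above, either free space
# with "docker system prune" or pass --force to skip the check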

TestRunningBinaryUpgrade (89.13s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2550485252 start -p running-upgrade-591913 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2550485252 start -p running-upgrade-591913 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (52.697500129s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-591913 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-591913 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.644529911s)
helpers_test.go:175: Cleaning up "running-upgrade-591913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-591913
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-591913: (2.586696273s)
--- PASS: TestRunningBinaryUpgrade (89.13s)

TestKubernetesUpgrade (101.87s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-191919 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-191919 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (56.910959711s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-191919
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-191919: (1.368751843s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-191919 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-191919 status --format={{.Host}}: exit status 7 (142.348833ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-191919 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-191919 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.850235738s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-191919 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-191919 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-191919 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (130.854172ms)

-- stdout --
	* [kubernetes-upgrade-191919] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-440051/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-440051/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-191919
	    minikube start -p kubernetes-upgrade-191919 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1919192 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-191919 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-191919 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-191919 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.983998443s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-191919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-191919
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-191919: (2.361583688s)
--- PASS: TestKubernetesUpgrade (101.87s)
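
Stripped of test plumbing, the flow above is the supported upgrade path: create on an old Kubernetes, stop, restart on a newer one. A condensed sketch (profile name hypothetical, versions as in the log):

$ out/minikube-linux-arm64 start -p upgrade-demo --kubernetes-version=v1.20.0 \
    --driver=docker --container-runtime=containerd
$ out/minikube-linux-arm64 stop -p upgrade-demo
$ out/minikube-linux-arm64 start -p upgrade-demo --kubernetes-version=v1.31.1 \
    --driver=docker --container-runtime=containerd
# Re-running start with an older --kubernetes-version is refused with exit 106
# (K8S_DOWNGRADE_UNSUPPORTED); as the CLI output above suggests, the options
# are delete-and-recreate or a second profile.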

TestMissingContainerUpgrade (181.31s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1907940598 start -p missing-upgrade-944704 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1907940598 start -p missing-upgrade-944704 --memory=2200 --driver=docker  --container-runtime=containerd: (1m42.9612027s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-944704
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-944704: (10.272314079s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-944704
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-944704 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-944704 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.145410336s)
helpers_test.go:175: Cleaning up "missing-upgrade-944704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-944704
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-944704: (2.747867423s)
--- PASS: TestMissingContainerUpgrade (181.31s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-983932 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-983932 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (94.159973ms)

-- stdout --
	* [NoKubernetes-983932] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-440051/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-440051/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
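
This subtest is pure flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive, so start exits 14 without creating anything. A sketch of the failure and the fix the error text suggests (profile name hypothetical):

$ minikube start -p demo --no-kubernetes --kubernetes-version=1.20 \
    --driver=docker --container-runtime=containerd
# X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes
$ minikube config unset kubernetes-version   # clears a globally pinned version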

TestNoKubernetes/serial/StartWithK8s (37.17s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-983932 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-983932 --driver=docker  --container-runtime=containerd: (36.752944602s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-983932 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.17s)

TestNoKubernetes/serial/StartWithStopK8s (23.01s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-983932 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-983932 --no-kubernetes --driver=docker  --container-runtime=containerd: (20.597483892s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-983932 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-983932 status -o json: exit status 2 (380.974163ms)

-- stdout --
	{"Name":"NoKubernetes-983932","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-983932
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-983932: (2.0336142s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.01s)

TestNoKubernetes/serial/Start (9.06s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-983932 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-983932 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.05613754s)
--- PASS: TestNoKubernetes/serial/Start (9.06s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-983932 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-983932 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.44238ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

TestNoKubernetes/serial/ProfileList (0.96s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.96s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-983932
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-983932: (1.210652249s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (6.8s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-983932 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-983932 --driver=docker  --container-runtime=containerd: (6.803752496s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.80s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-983932 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-983932 "sudo systemctl is-active --quiet service kubelet": exit status 1 (313.99104ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

TestStoppedBinaryUpgrade/Setup (0.95s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.95s)

TestStoppedBinaryUpgrade/Upgrade (126.47s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.507248241 start -p stopped-upgrade-808487 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.507248241 start -p stopped-upgrade-808487 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (49.732112629s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.507248241 -p stopped-upgrade-808487 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.507248241 -p stopped-upgrade-808487 stop: (23.56146299s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-808487 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-808487 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (53.171708138s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (126.47s)
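
The scenario here is a binary upgrade against a stopped cluster: an old release creates and stops the profile, then the binary under test adopts it. Condensed (the /tmp path is the test's cached v1.26.0 binary; note the old release still uses the deprecated --vm-driver spelling):

$ /tmp/minikube-v1.26.0.507248241 start -p stopped-upgrade-808487 --memory=2200 \
    --vm-driver=docker --container-runtime=containerd
$ /tmp/minikube-v1.26.0.507248241 -p stopped-upgrade-808487 stop
$ out/minikube-linux-arm64 start -p stopped-upgrade-808487 --memory=2200 \
    --driver=docker --container-runtime=containerd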

TestPause/serial/Start (57.37s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-347221 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0924 19:20:48.458550  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:21:02.387229  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-347221 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (57.372344003s)
--- PASS: TestPause/serial/Start (57.37s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-808487
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-808487: (1.041797484s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

TestPause/serial/SecondStartNoReconfiguration (7.04s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-347221 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-347221 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.012336438s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.04s)

TestPause/serial/Pause (1.08s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-347221 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-347221 --alsologtostderr -v=5: (1.077659365s)
--- PASS: TestPause/serial/Pause (1.08s)

TestPause/serial/VerifyStatus (0.47s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-347221 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-347221 --output=json --layout=cluster: exit status 2 (469.776705ms)

-- stdout --
	{"Name":"pause-347221","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-347221","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.47s)
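
The contract verified here: a paused profile reports HTTP-style code 418 ("Paused") for the cluster and apiserver, 405 ("Stopped") for the kubelet, and the status command itself exits 2. A sketch of pulling the relevant fields out of that JSON, assuming jq is available on the host:

$ out/minikube-linux-arm64 status -p pause-347221 --output=json --layout=cluster \
    | jq '{cluster: .StatusName, apiserver: .Nodes[0].Components.apiserver.StatusName}'
# -> cluster "Paused", apiserver "Paused"; minikube itself still exits 2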

TestPause/serial/Unpause (1.04s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-347221 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-347221 --alsologtostderr -v=5: (1.03881353s)
--- PASS: TestPause/serial/Unpause (1.04s)

TestPause/serial/PauseAgain (0.99s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-347221 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.99s)

TestPause/serial/DeletePaused (3.08s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-347221 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-347221 --alsologtostderr -v=5: (3.077068838s)
--- PASS: TestPause/serial/DeletePaused (3.08s)

TestPause/serial/VerifyDeletedResources (0.87s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-347221
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-347221: exit status 1 (23.631048ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-347221: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.87s)
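
The three checks above amount to verifying that delete left no Docker artifacts behind. Roughly, by hand:

$ docker ps -a --filter name=pause-347221   # expect no matching container
$ docker volume inspect pause-347221        # expect exit 1: "no such volume"
$ docker network ls | grep pause-347221     # expect no output (grep exits 1)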

TestNetworkPlugins/group/false (5.41s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-478627 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-478627 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (242.11135ms)

-- stdout --
	* [false-478627] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-440051/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-440051/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 19:21:52.669918  629860 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:21:52.670122  629860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:21:52.670150  629860 out.go:358] Setting ErrFile to fd 2...
	I0924 19:21:52.670169  629860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:21:52.670464  629860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-440051/.minikube/bin
	I0924 19:21:52.670946  629860 out.go:352] Setting JSON to false
	I0924 19:21:52.671905  629860 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11062,"bootTime":1727194651,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0924 19:21:52.672019  629860 start.go:139] virtualization:  
	I0924 19:21:52.675993  629860 out.go:177] * [false-478627] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0924 19:21:52.682861  629860 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:21:52.682917  629860 notify.go:220] Checking for updates...
	I0924 19:21:52.685512  629860 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:21:52.689742  629860 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-440051/kubeconfig
	I0924 19:21:52.691556  629860 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-440051/.minikube
	I0924 19:21:52.693367  629860 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0924 19:21:52.695174  629860 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:21:52.697823  629860 config.go:182] Loaded profile config "force-systemd-env-060599": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 19:21:52.698009  629860 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:21:52.734202  629860 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 19:21:52.734321  629860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 19:21:52.822960  629860 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:57 SystemTime:2024-09-24 19:21:52.812120777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 19:21:52.823076  629860 docker.go:318] overlay module found
	I0924 19:21:52.825523  629860 out.go:177] * Using the docker driver based on user configuration
	I0924 19:21:52.827251  629860 start.go:297] selected driver: docker
	I0924 19:21:52.827268  629860 start.go:901] validating driver "docker" against <nil>
	I0924 19:21:52.827292  629860 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:21:52.829933  629860 out.go:201] 
	W0924 19:21:52.831806  629860 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0924 19:21:52.833532  629860 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-478627 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-478627

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-478627

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-478627

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-478627

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-478627

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-478627

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-478627

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-478627

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-478627

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-478627

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-478627

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-478627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-478627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-478627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-478627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-478627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-478627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-478627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-478627" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-478627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-478627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-478627" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-478627

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

>>> host: /etc/crio:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

>>> host: crio config:
* Profile "false-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-478627"

----------------------- debugLogs end: false-478627 [took: 4.927293678s] --------------------------------
helpers_test.go:175: Cleaning up "false-478627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-478627
--- PASS: TestNetworkPlugins/group/false (5.41s)
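
Note: every ">>> host:" probe in the debugLogs dump above prints the same "Profile not found" hint because no cluster was ever started for the false-478627 profile, so there is no host to inspect. A minimal sketch of the equivalent manual check, assuming the same out/minikube-linux-arm64 binary used throughout this run:

    # List the profiles minikube knows about; a profile that was never started
    # (or was already deleted) is absent from this output.
    out/minikube-linux-arm64 profile list

    # The cleanup step logged above is equivalent to:
    out/minikube-linux-arm64 delete -p false-478627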

TestStartStop/group/old-k8s-version/serial/FirstStart (170.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-497730 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0924 19:24:05.455351  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-497730 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m50.805655653s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (170.81s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-845844 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-845844 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m27.026560149s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.03s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-497730 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9dd3216d-e086-4933-8526-37b673868ee4] Pending
helpers_test.go:344: "busybox" [9dd3216d-e086-4933-8526-37b673868ee4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0924 19:26:02.388041  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [9dd3216d-e086-4933-8526-37b673868ee4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.01140857s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-497730 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.79s)
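
Note: the DeployApp step creates the busybox pod from testdata/busybox.yaml, waits for the integration-test=busybox label to report Running, then checks the container's open-file limit. A hedged sketch of the same flow outside the test harness (context, manifest, and labels are copied from the log; the kubectl wait form and timeout are our substitution for the harness's poller):

    # Create the pod in the cluster under test.
    kubectl --context old-k8s-version-497730 create -f testdata/busybox.yaml

    # Wait for the labelled pod to become Ready; 8m mirrors the test's window.
    kubectl --context old-k8s-version-497730 wait --for=condition=Ready \
      pod -l integration-test=busybox --timeout=8m0s

    # Final check from the log: the container can execute a shell command.
    kubectl --context old-k8s-version-497730 exec busybox -- /bin/sh -c "ulimit -n"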

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-497730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-497730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.322751553s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-497730 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.46s)
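
Note: --images and --registries let the test redirect an addon's image; here metrics-server is pointed at registry.k8s.io/echoserver:1.4 served from the deliberately unreachable fake.domain registry, which suggests only the enable/describe plumbing is being exercised rather than a working metrics pipeline. The override, isolated as a sketch (commands copied verbatim from the run above):

    # Enable the addon with image and registry substituted.
    out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-497730 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain

    # Confirm the deployment picked up the substituted image.
    kubectl --context old-k8s-version-497730 describe deploy/metrics-server -n kube-system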

TestStartStop/group/old-k8s-version/serial/Stop (12.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-497730 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-497730 --alsologtostderr -v=3: (12.438207435s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-497730 -n old-k8s-version-497730
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-497730 -n old-k8s-version-497730: exit status 7 (80.186356ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-497730 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
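
Note: "status error: exit status 7 (may be ok)" is expected here: with the profile stopped, status prints "Stopped" for the host and exits non-zero, and the test accepts exit code 7 before enabling the dashboard addon offline. A sketch of checking that exit code directly (the rc variable is ours):

    # Query only the host field of a stopped profile.
    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-497730 -n old-k8s-version-497730
    rc=$?   # 7 in this run, meaning the host is stopped rather than errored
    echo "status exit code: ${rc}"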

TestStartStop/group/old-k8s-version/serial/SecondStart (373.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-497730 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-497730 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (6m13.504054406s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-497730 -n old-k8s-version-497730
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (373.93s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-845844 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fa1027da-41c1-4b59-8f58-5ca97a20f112] Pending
helpers_test.go:344: "busybox" [fa1027da-41c1-4b59-8f58-5ca97a20f112] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fa1027da-41c1-4b59-8f58-5ca97a20f112] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004343629s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-845844 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-845844 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-845844 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.129745632s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-845844 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-845844 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-845844 --alsologtostderr -v=3: (12.107151308s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-845844 -n default-k8s-diff-port-845844
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-845844 -n default-k8s-diff-port-845844: exit status 7 (81.884594ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-845844 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-845844 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0924 19:28:51.529525  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:30:48.460149  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:02.387590  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-845844 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.868642363s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-845844 -n default-k8s-diff-port-845844
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.23s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-trnxs" [5d558cf8-ae7d-442b-aa7f-27efca722dff] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004380568s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-trnxs" [5d558cf8-ae7d-442b-aa7f-27efca722dff] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004211671s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-845844 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-845844 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
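
Note: VerifyKubernetesImages lists the profile's images as JSON and reports anything that is not a stock minikube image; the busybox and kindnetd entries above are the expected leftovers from earlier steps. The listing, as a sketch (the jq filter and its repoTags field name are our assumption, not part of the test):

    # Dump the image list as JSON, exactly as the test does.
    out/minikube-linux-arm64 -p default-k8s-diff-port-845844 image list --format=json

    # Optionally extract just the tags, assuming jq is installed and the JSON
    # entries carry a repoTags array.
    out/minikube-linux-arm64 -p default-k8s-diff-port-845844 image list --format=json \
      | jq -r '.[].repoTags[]'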

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-845844 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-845844 -n default-k8s-diff-port-845844
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-845844 -n default-k8s-diff-port-845844: exit status 2 (355.018378ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-845844 -n default-k8s-diff-port-845844
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-845844 -n default-k8s-diff-port-845844: exit status 2 (312.476872ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-845844 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-845844 -n default-k8s-diff-port-845844
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-845844 -n default-k8s-diff-port-845844
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.05s)
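
Note: the Pause step drives a full pause/unpause cycle: after pause, the apiserver field reports "Paused" and the kubelet field reports "Stopped", each with exit status 2 (tolerated as "may be ok"); after unpause, both queries are rerun and must succeed. Condensed into a sketch (commands copied verbatim from the run):

    # Pause, then confirm the paused state through both status fields.
    out/minikube-linux-arm64 pause -p default-k8s-diff-port-845844 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-845844 -n default-k8s-diff-port-845844  # "Paused", exit 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-845844 -n default-k8s-diff-port-845844    # "Stopped", exit 2

    # Resume; the same status queries are then expected to exit 0.
    out/minikube-linux-arm64 unpause -p default-k8s-diff-port-845844 --alsologtostderr -v=1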

TestStartStop/group/embed-certs/serial/FirstStart (92.28s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-328982 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-328982 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m32.280087822s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (92.28s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-252jr" [ff0f6516-25e4-4cf5-bbbe-b20f010746c1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004265252s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-252jr" [ff0f6516-25e4-4cf5-bbbe-b20f010746c1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011588295s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-497730 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-497730 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/old-k8s-version/serial/Pause (3.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-497730 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-497730 --alsologtostderr -v=1: (1.048943772s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-497730 -n old-k8s-version-497730
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-497730 -n old-k8s-version-497730: exit status 2 (391.020402ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-497730 -n old-k8s-version-497730
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-497730 -n old-k8s-version-497730: exit status 2 (456.261033ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-497730 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-497730 -n old-k8s-version-497730
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-497730 -n old-k8s-version-497730
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.88s)

TestStartStop/group/no-preload/serial/FirstStart (74.72s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-893365 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-893365 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m14.721745297s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.72s)

TestStartStop/group/embed-certs/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-328982 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d203fa27-ef20-4300-b4c4-0ccc46035d54] Pending
helpers_test.go:344: "busybox" [d203fa27-ef20-4300-b4c4-0ccc46035d54] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d203fa27-ef20-4300-b4c4-0ccc46035d54] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004437295s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-328982 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-328982 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-328982 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.163825649s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-328982 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/embed-certs/serial/Stop (12.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-328982 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-328982 --alsologtostderr -v=3: (12.209120402s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.21s)

TestStartStop/group/no-preload/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-893365 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cf6b9863-ee3f-4fde-80f3-be2e37bcb949] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cf6b9863-ee3f-4fde-80f3-be2e37bcb949] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003592424s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-893365 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-893365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-893365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.074505139s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-893365 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-328982 -n embed-certs-328982
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-328982 -n embed-certs-328982: exit status 7 (114.215905ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-328982 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/embed-certs/serial/SecondStart (267.24s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-328982 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-328982 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.881165067s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-328982 -n embed-certs-328982
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.24s)

TestStartStop/group/no-preload/serial/Stop (12.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-893365 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-893365 --alsologtostderr -v=3: (12.301592555s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.30s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-893365 -n no-preload-893365
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-893365 -n no-preload-893365: exit status 7 (153.076031ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-893365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/no-preload/serial/SecondStart (276.8s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-893365 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0924 19:35:48.458786  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:01.069289  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:01.075660  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:01.087194  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:01.108620  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:01.150049  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:01.231558  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:01.393677  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:01.715374  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:02.357360  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:02.387987  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:03.639094  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:06.201224  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:11.323071  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:21.565476  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:42.046972  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:20.963448  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:20.970108  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:20.981639  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:21.005037  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:21.046710  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:21.128349  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:21.289866  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:21.612010  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:22.253794  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:23.010088  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:23.535586  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:26.097646  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:31.220026  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:41.461488  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:38:01.943352  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:38:42.905144  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:38:44.932353  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-893365 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m36.313643172s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-893365 -n no-preload-893365
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (276.80s)
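
Note: the E0924 cert_rotation lines interleaved above come from client-go's certificate watcher still referencing client.crt files for other profiles (old-k8s-version-497730, default-k8s-diff-port-845844, and earlier ones) whose certificates parallel tests had already deleted; they are noise relative to this test's outcome. A hypothetical filter for reading such output, assuming the run was saved to run.log (the filename is ours):

    # Drop the repeated cert_rotation warnings so the test steps stand out.
    grep -v 'cert_rotation.go' run.log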

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-knsmc" [4d2f3b73-f426-421e-8c66-9b873ecfeeee] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003491098s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-knsmc" [4d2f3b73-f426-421e-8c66-9b873ecfeeee] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004191937s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-328982 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-328982 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.66s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-328982 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-328982 -n embed-certs-328982
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-328982 -n embed-certs-328982: exit status 2 (411.925864ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-328982 -n embed-certs-328982
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-328982 -n embed-certs-328982: exit status 2 (413.449145ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-328982 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-328982 -n embed-certs-328982
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-328982 -n embed-certs-328982
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.66s)

TestStartStop/group/newest-cni/serial/FirstStart (41.07s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-299045 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-299045 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (41.070148543s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.07s)
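
Note: newest-cni starts with --network-plugin=cni but installs no CNI, forwards a pod CIDR to kubeadm via --extra-config, and narrows --wait to apiserver, system_pods, and default_sa since regular pods cannot schedule without a network plugin (the harness warns about this in a later step). The invocation, isolated as a sketch (flags copied verbatim from the run):

    # CNI selected but left unconfigured; --extra-config passes the CIDR to kubeadm.
    out/minikube-linux-arm64 start -p newest-cni-299045 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.1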

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rvjfr" [cdc12a20-734a-41d1-9505-b54a68be030a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00435616s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rvjfr" [cdc12a20-734a-41d1-9505-b54a68be030a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003944509s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-893365 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-893365 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (3.84s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-893365 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-893365 --alsologtostderr -v=1: (1.003387134s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-893365 -n no-preload-893365
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-893365 -n no-preload-893365: exit status 2 (444.921159ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-893365 -n no-preload-893365
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-893365 -n no-preload-893365: exit status 2 (405.535451ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-893365 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-893365 -n no-preload-893365
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-893365 -n no-preload-893365
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.84s)

TestNetworkPlugins/group/auto/Start (99.38s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-478627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-478627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m39.381709715s)
--- PASS: TestNetworkPlugins/group/auto/Start (99.38s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.79s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-299045 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-299045 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.794410629s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.79s)
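The --images/--registries pair rewrites where the addon pulls from, so the flags above should leave metrics-server pointing at fake.domain/registry.k8s.io/echoserver:1.4. A sketch for spot-checking that the override landed (the deployment name metrics-server and the kube-system namespace are the usual defaults, assumed here rather than shown in this log):

kubectl --context newest-cni-299045 -n kube-system get deployment metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# expected, per the flags above: fake.domain/registry.k8s.io/echoserver:1.4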

TestStartStop/group/newest-cni/serial/Stop (1.34s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-299045 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-299045 --alsologtostderr -v=3: (1.341954342s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-299045 -n newest-cni-299045
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-299045 -n newest-cni-299045: exit status 7 (113.978118ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-299045 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)
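Note the exit-code convention at work: minikube status reports state through its exit status as well as stdout, so scripts can branch without parsing output. A minimal sketch (code meanings inferred from this run: 0 running, 7 host stopped; other non-zero values may still be ok, as the harness notes):

minikube status --format='{{.Host}}' -p newest-cni-299045
case "$?" in
  0) echo "host running" ;;
  7) echo "host stopped" ;;    # the exit status seen above
  *) echo "other state (may be ok)" ;;
esac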

TestStartStop/group/newest-cni/serial/SecondStart (23.97s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-299045 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0924 19:40:04.826602  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-299045 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (23.428157133s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-299045 -n newest-cni-299045
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (23.97s)
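SecondStart restarts the stopped profile with the same flags, so the --feature-gates and --extra-config settings must survive the restart. One way to spot-check that by hand (a sketch; the component=kube-apiserver label is the standard kubeadm static-pod label and an assumption here, not output from this run):

kubectl --context newest-cni-299045 -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' | grep -o 'feature-gates=[^"]*'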

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-299045 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)
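VerifyKubernetesImages diffs the runtime's image list against the expected set for this Kubernetes version and reports anything extra (here, the kindnet CNI image). To eyeball the same list by hand (a sketch; jq and the repoTags field name are assumptions about the JSON shape, which this log does not print):

minikube -p newest-cni-299045 image list --format=json | jq -r '.[].repoTags[]?'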

TestStartStop/group/newest-cni/serial/Pause (3.35s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-299045 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-299045 -n newest-cni-299045
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-299045 -n newest-cni-299045: exit status 2 (322.347129ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-299045 -n newest-cni-299045
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-299045 -n newest-cni-299045: exit status 2 (354.060164ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-299045 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-299045 -n newest-cni-299045
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-299045 -n newest-cni-299045
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.35s)
E0924 19:45:48.458420  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:01.068794  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:02.387338  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:11.328912  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/auto-478627/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:11.335321  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/auto-478627/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:11.346810  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/auto-478627/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:11.368243  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/auto-478627/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:11.409723  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/auto-478627/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:11.491246  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/auto-478627/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:11.652742  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/auto-478627/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:11.975023  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/auto-478627/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:12.617263  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/auto-478627/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:13.899164  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/auto-478627/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:16.460856  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/auto-478627/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:21.582744  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/auto-478627/client.crt: no such file or directory" logger="UnhandledError"
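The cert_rotation errors interleaved above and below appear to be benign noise rather than failures: client-go's certificate-reload watcher still holds paths to client.crt files of profiles (auto-478627, functional-371992, and others) whose directories were removed when those profiles were deleted, so each reload attempt logs an UnhandledError. No test in this run fails because of them.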

TestNetworkPlugins/group/kindnet/Start (83.96s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-478627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0924 19:40:45.457617  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:48.459130  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:01.068642  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/old-k8s-version-497730/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:02.388006  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/addons-783184/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-478627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m23.955054828s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.96s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-478627 "pgrep -a kubelet"
I0924 19:41:11.069468  445436 config.go:182] Loaded profile config "auto-478627": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-478627 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b4k5d" [5895b845-0299-42a8-be28-0996c5c9b2e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b4k5d" [5895b845-0299-42a8-be28-0996c5c9b2e1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003743694s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)
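Each NetCatPod step force-replaces a small deployment from testdata/netcat-deployment.yaml and waits for app=netcat to reach Running. The manifest itself is not reproduced in this log; a hypothetical stand-in with the same shape (a single dnsutils-named container listening on 8080 behind an app=netcat label; the real testdata also supplies the "netcat" service that the HairPin step dials) would look like:

kubectl --context auto-478627 replace --force -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netcat
spec:
  replicas: 1
  selector:
    matchLabels: {app: netcat}
  template:
    metadata:
      labels: {app: netcat}
    spec:
      containers:
      - name: dnsutils
        image: registry.k8s.io/e2e-test-images/agnhost:2.40  # hypothetical image choice
        command: ["/agnhost", "netexec", "--http-port=8080"]
EOF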

TestNetworkPlugins/group/auto/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-478627 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
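The DNS/Localhost/HairPin trio that closes each plugin group probes three distinct paths: cluster DNS resolution, the pod's own loopback, and hairpin traffic, where the pod dials its own "netcat" service name so packets leave the pod and must be NATed back to it. The commands run as-is against any profile in this section:

kubectl --context auto-478627 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context auto-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin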

TestNetworkPlugins/group/calico/Start (70.96s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-478627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-478627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m10.955596913s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.96s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vh5pt" [bf6a997b-f066-419c-88a1-0cde59d5299a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004167093s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
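ControllerPod gives the CNI's own daemon pod up to 10 minutes to become healthy before the connectivity checks run. The harness polls through the Go client; a rough kubectl stand-in for the same wait (an equivalent, not the test's actual mechanism):

kubectl --context kindnet-478627 -n kube-system wait pod -l app=kindnet \
  --for=condition=Ready --timeout=10m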

TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-478627 "pgrep -a kubelet"
I0924 19:41:53.113071  445436 config.go:182] Loaded profile config "kindnet-478627": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.32s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-478627 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-967tl" [dee1f07f-6843-4727-86fc-a94b82703cc6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-967tl" [dee1f07f-6843-4727-86fc-a94b82703cc6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005408634s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.32s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-478627 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (57.07s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-478627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0924 19:42:48.667903  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/default-k8s-diff-port-845844/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-478627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (57.069988036s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.07s)
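Unlike the kindnet and calico runs, this one hands --cni a manifest path instead of a built-in plugin name, so minikube applies testdata/kube-flannel.yaml as the cluster CNI. Generic form of the two styles (profile names hypothetical):

minikube start -p builtin-cni --cni=flannel --driver=docker --container-runtime=containerd
minikube start -p custom-cni --cni=./my-cni-manifest.yaml --driver=docker --container-runtime=containerd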

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dn9zx" [cdd0e20c-2ae2-44a6-834d-6dd5ed1a0599] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005280404s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-478627 "pgrep -a kubelet"
I0924 19:42:59.660979  445436 config.go:182] Loaded profile config "calico-478627": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

TestNetworkPlugins/group/calico/NetCatPod (10.51s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-478627 replace --force -f testdata/netcat-deployment.yaml
I0924 19:43:00.029121  445436 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l2vfx" [e1ad96e2-6443-4e5c-860e-b2811adeaaf3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-l2vfx" [e1ad96e2-6443-4e5c-860e-b2811adeaaf3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.006785121s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.51s)

TestNetworkPlugins/group/calico/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-478627 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-478627 "pgrep -a kubelet"
I0924 19:43:25.133067  445436 config.go:182] Loaded profile config "custom-flannel-478627": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-478627 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-w4z7h" [760d4ab5-925b-4c54-802e-981285fc3397] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-w4z7h" [760d4ab5-925b-4c54-802e-981285fc3397] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004470225s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

TestNetworkPlugins/group/enable-default-cni/Start (74.47s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-478627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-478627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m14.473044333s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.47s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-478627 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/flannel/Start (52.19s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-478627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0924 19:44:12.727356  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/no-preload-893365/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:12.733676  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/no-preload-893365/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:12.745909  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/no-preload-893365/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:12.767309  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/no-preload-893365/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:12.808600  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/no-preload-893365/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:12.890626  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/no-preload-893365/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:13.052074  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/no-preload-893365/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:13.374110  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/no-preload-893365/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:14.015726  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/no-preload-893365/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:15.297897  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/no-preload-893365/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:17.859586  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/no-preload-893365/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:22.981490  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/no-preload-893365/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:33.223428  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/no-preload-893365/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-478627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (52.187171607s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.19s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-478627 "pgrep -a kubelet"
I0924 19:44:48.369076  445436 config.go:182] Loaded profile config "enable-default-cni-478627": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-478627 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x9kdn" [55cd466f-b288-4025-bb55-c628c4b8da9e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-x9kdn" [55cd466f-b288-4025-bb55-c628c4b8da9e] Running
E0924 19:44:53.705287  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/no-preload-893365/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004113029s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-v4wld" [94684426-d66b-429a-8387-c9f74efb9fa3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006723721s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-478627 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-478627 "pgrep -a kubelet"
I0924 19:45:01.087285  445436 config.go:182] Loaded profile config "flannel-478627": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

TestNetworkPlugins/group/flannel/NetCatPod (9.31s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-478627 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pr7bj" [1a813a86-c402-4fdb-b5d7-2831ec64ab35] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pr7bj" [1a813a86-c402-4fdb-b5d7-2831ec64ab35] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.007816052s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.31s)

TestNetworkPlugins/group/flannel/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-478627 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.28s)

TestNetworkPlugins/group/flannel/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (71.31s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-478627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0924 19:45:31.531901  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/functional-371992/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-478627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m11.310900057s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.31s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-478627 "pgrep -a kubelet"
I0924 19:46:30.686954  445436 config.go:182] Loaded profile config "bridge-478627": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-478627 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nd67z" [73740ea2-f558-491d-a3fe-c32d0d98be6d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0924 19:46:31.824167  445436 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/auto-478627/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-nd67z" [73740ea2-f558-491d-a3fe-c32d0d98be6d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004577066s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-478627 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-478627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)


Test skip (27/327)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.57s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-263463 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-263463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-263463
--- SKIP: TestDownloadOnlyKic (0.57s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.3s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-843513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-843513
--- SKIP: TestStartStop/group/disable-driver-mounts (0.30s)
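
Note that this group still spends 0.30s even though it skips: the group registered a profile, and the helpers_test.go cleanup deletes it before returning. That cleanup boils down to invoking the binary under test with delete -p; a sketch, with the binary path taken from the log lines above:

package integration

import (
	"os/exec"
	"testing"
)

// cleanupProfile deletes a minikube profile with the binary under test,
// mirroring the helpers_test.go pattern shown in the log above.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	cmd := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile)
	if out, err := cmd.CombinedOutput(); err != nil {
		t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
	}
}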

TestNetworkPlugins/group/kubenet (4.82s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-478627 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-478627

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-478627

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-478627

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-478627

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-478627

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-478627

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-478627

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-478627

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-478627

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-478627

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: /etc/hosts:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: /etc/resolv.conf:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-478627

>>> host: crictl pods:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: crictl containers:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> k8s: describe netcat deployment:
error: context "kubenet-478627" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-478627" does not exist

>>> k8s: netcat logs:
error: context "kubenet-478627" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-478627" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-478627" does not exist

>>> k8s: coredns logs:
error: context "kubenet-478627" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-478627" does not exist

>>> k8s: api server logs:
error: context "kubenet-478627" does not exist

>>> host: /etc/cni:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: ip a s:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: ip r s:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: iptables-save:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: iptables table nat:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-478627" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-478627" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-478627" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: kubelet daemon config:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> k8s: kubelet logs:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19700-440051/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Sep 2024 19:21:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-517492
contexts:
- context:
    cluster: force-systemd-flag-517492
    extensions:
    - extension:
        last-update: Tue, 24 Sep 2024 19:21:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: force-systemd-flag-517492
  name: force-systemd-flag-517492
current-context: force-systemd-flag-517492
kind: Config
preferences: {}
users:
- name: force-systemd-flag-517492
  user:
    client-certificate: /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/force-systemd-flag-517492/client.crt
    client-key: /home/jenkins/minikube-integration/19700-440051/.minikube/profiles/force-systemd-flag-517492/client.key
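
Every context-related failure in this dump traces back to the kubeconfig above, which only knows force-systemd-flag-517492; the kubenet-478627 cluster was never created. A sketch of the same existence check done programmatically with client-go's clientcmd package; the kubeconfig file path here is an assumption, since the dump shows only the certificate paths:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The kubeconfig location is assumed for illustration.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	// This is the condition kubectl keeps reporting above.
	if _, ok := cfg.Contexts["kubenet-478627"]; !ok {
		fmt.Println(`context was not found for specified context: kubenet-478627`)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
}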

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-478627

>>> host: docker daemon status:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: docker daemon config:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: docker system info:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: cri-docker daemon status:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: cri-docker daemon config:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: cri-dockerd version:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: containerd daemon status:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: containerd daemon config:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: containerd config dump:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: crio daemon status:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: crio daemon config:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: /etc/crio:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"

>>> host: crio config:
* Profile "kubenet-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-478627"
----------------------- debugLogs end: kubenet-478627 [took: 4.601550169s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-478627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-478627
--- SKIP: TestNetworkPlugins/group/kubenet (4.82s)
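
The gate at net_test.go:93 reflects a compatibility rule: kubenet is not a CNI plugin, while containerd needs one. A sketch of that check, with the parameter name assumed for illustration:

package integration

import "testing"

// skipIfKubenetNeedsCNI sketches the net_test.go gate: kubenet bypasses
// CNI, so a runtime that requires CNI cannot run the kubenet group.
func skipIfKubenetNeedsCNI(t *testing.T, containerRuntime string) {
	t.Helper()
	if containerRuntime != "docker" {
		t.Skipf("Skipping the test as the %s container runtime requires CNI", containerRuntime)
	}
}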

TestNetworkPlugins/group/cilium (5.56s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-478627 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-478627

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-478627

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-478627

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-478627

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-478627

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-478627

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-478627

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-478627

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-478627

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-478627

>>> host: /etc/nsswitch.conf:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: /etc/hosts:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: /etc/resolv.conf:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-478627

>>> host: crictl pods:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: crictl containers:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> k8s: describe netcat deployment:
error: context "cilium-478627" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-478627" does not exist

>>> k8s: netcat logs:
error: context "cilium-478627" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-478627" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-478627" does not exist

>>> k8s: coredns logs:
error: context "cilium-478627" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-478627" does not exist

>>> k8s: api server logs:
error: context "cilium-478627" does not exist

>>> host: /etc/cni:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: ip a s:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: ip r s:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: iptables-save:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: iptables table nat:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-478627

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-478627

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-478627" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-478627" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-478627

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-478627

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-478627" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-478627" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-478627" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-478627" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-478627" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: kubelet daemon config:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> k8s: kubelet logs:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-478627

>>> host: docker daemon status:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: docker daemon config:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: docker system info:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: cri-docker daemon status:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: cri-docker daemon config:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: cri-dockerd version:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: containerd daemon status:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: containerd daemon config:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: containerd config dump:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: crio daemon status:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: crio daemon config:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: /etc/crio:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"

>>> host: crio config:
* Profile "cilium-478627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-478627"
----------------------- debugLogs end: cilium-478627 [took: 5.312707332s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-478627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-478627
--- SKIP: TestNetworkPlugins/group/cilium (5.56s)
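
Both skipped network-plugin groups still emit the full debugLogs sweep above: a fixed list of probes run against a profile that was never created, which is why every probe fails the same way. A compressed sketch of such a collection loop, hedged; the probe list is abbreviated and the helper name is assumed:

package integration

import (
	"fmt"
	"os/exec"
)

// dumpDebugLogs runs a fixed probe list against a profile's kubectl
// context, mirroring the ">>> ..." sections above. The probe list is
// shortened for illustration.
func dumpDebugLogs(profile string) {
	probes := map[string][]string{
		"netcat: nslookup kubernetes.default": {"kubectl", "--context", profile, "exec", "deploy/netcat", "--", "nslookup", "kubernetes.default"},
		"k8s: coredns logs":                   {"kubectl", "--context", profile, "logs", "-n", "kube-system", "-l", "k8s-app=kube-dns"},
	}
	for name, argv := range probes {
		fmt.Printf(">>> %s:\n", name)
		out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println(err) // e.g. the "context was not found" failures above
		}
	}
}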