Test Report: Docker_Linux_containerd_arm64 19689

af422e057ba227eec8656c67d09f56de251f325e:2024-09-23:36336

Test fail (1/327)

Order  Failed test                Duration (s)
29     TestAddons/serial/Volcano  200.11
TestAddons/serial/Volcano (200.11s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 53.599844ms
addons_test.go:843: volcano-admission stabilized in 54.432038ms
addons_test.go:835: volcano-scheduler stabilized in 54.941681ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-knktf" [624c2582-42f8-491a-ae9d-781edffdc337] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004445518s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-gs99c" [1c9bb9fc-6628-407f-be87-19f90cf5be12] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0037733s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-724bn" [f043b7ae-031a-4ced-afbb-e64a2599c107] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003347181s
addons_test.go:870: (dbg) Run:  kubectl --context addons-895903 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-895903 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-895903 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [fc66b5df-a403-4ee1-bec1-11c1c45ff82c] Pending
helpers_test.go:344: "test-job-nginx-0" [fc66b5df-a403-4ee1-bec1-11c1c45ff82c] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
addons_test.go:902: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:902: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-895903 -n addons-895903
addons_test.go:902: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-23 10:32:45.559032754 +0000 UTC m=+473.817918333
addons_test.go:902: (dbg) Run:  kubectl --context addons-895903 describe po test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-895903 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-a5b3cc79-ac3d-475f-bb39-d2229227cdbd
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z2dkf (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-z2dkf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:902: (dbg) Run:  kubectl --context addons-895903 logs test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-895903 logs test-job-nginx-0 -n my-volcano:
addons_test.go:903: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
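Editor's note on the failure above: the scheduler rejected the pod with "0/1 nodes are unavailable: 1 Insufficient cpu." The single minikube node was created with 2 CPUs (see "NanoCpus": 2000000000 in the docker inspect output), test-job-nginx-0 requests a full cpu: 1, and the many enabled addons already reserve part of the node's allocatable CPU. The scheduler's fit check is plain millicore arithmetic, sketched below; the addon-request total is an assumed illustrative figure, not a value measured from this run.

```shell
# Sketch of the scheduler's CPU fit check, in millicores.
# On a live cluster the real numbers come from:
#   kubectl describe node <node>   (under "Allocated resources")
allocatable_mcpu=2000        # node allocatable: 2 CPUs = 2000m
addon_requests_mcpu=1350     # sum of existing pod requests (assumed value)
pod_request_mcpu=1000        # test-job-nginx-0 requests cpu: 1 = 1000m

free_mcpu=$((allocatable_mcpu - addon_requests_mcpu))
if [ "$pod_request_mcpu" -gt "$free_mcpu" ]; then
    echo "unschedulable: needs ${pod_request_mcpu}m, only ${free_mcpu}m free"
fi
```

With these example numbers the pod needs 1000m but only 650m is free, matching the FailedScheduling event; giving the test cluster more CPUs (e.g. minikube's --cpus flag) or lowering the job's request would let it schedule.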
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-895903
helpers_test.go:235: (dbg) docker inspect addons-895903:

-- stdout --
	[
	    {
	        "Id": "4d7c6b023610b09dcdd899a5e72c5b691f34eb04ad3c80a82e9b18ab772437c7",
	        "Created": "2024-09-23T10:25:31.536953277Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2614295,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T10:25:31.678814399Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/4d7c6b023610b09dcdd899a5e72c5b691f34eb04ad3c80a82e9b18ab772437c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4d7c6b023610b09dcdd899a5e72c5b691f34eb04ad3c80a82e9b18ab772437c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/4d7c6b023610b09dcdd899a5e72c5b691f34eb04ad3c80a82e9b18ab772437c7/hosts",
	        "LogPath": "/var/lib/docker/containers/4d7c6b023610b09dcdd899a5e72c5b691f34eb04ad3c80a82e9b18ab772437c7/4d7c6b023610b09dcdd899a5e72c5b691f34eb04ad3c80a82e9b18ab772437c7-json.log",
	        "Name": "/addons-895903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-895903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-895903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/08fba6eaa099cf49c775b34319663b148eb86df6e084507181455dbd7c699a7d-init/diff:/var/lib/docker/overlay2/3f48eb309b414f1318a2f8c59316f40e6520b9d7d492e0795786c9b63367452d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/08fba6eaa099cf49c775b34319663b148eb86df6e084507181455dbd7c699a7d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/08fba6eaa099cf49c775b34319663b148eb86df6e084507181455dbd7c699a7d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/08fba6eaa099cf49c775b34319663b148eb86df6e084507181455dbd7c699a7d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-895903",
	                "Source": "/var/lib/docker/volumes/addons-895903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-895903",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-895903",
	                "name.minikube.sigs.k8s.io": "addons-895903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7855619e49bff79b2c9bf4ba40ca6602c3337ccae8c5e425338f7d45e968b379",
	            "SandboxKey": "/var/run/docker/netns/7855619e49bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41421"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41422"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41425"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41423"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41424"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-895903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0b49d225d0f0f97400a97c62672b238e2d4ebfd01f050c880dacf5466c522167",
	                    "EndpointID": "ad8d62fb49c697e2bbf3a723e08fa40412599e4af81293aadbd689c78fd698b0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-895903",
	                        "4d7c6b023610"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-895903 -n addons-895903
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-895903 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-895903 logs -n 25: (1.599742925s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-179078   | jenkins | v1.34.0 | 23 Sep 24 10:24 UTC |                     |
	|         | -p download-only-179078              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 10:24 UTC | 23 Sep 24 10:24 UTC |
	| delete  | -p download-only-179078              | download-only-179078   | jenkins | v1.34.0 | 23 Sep 24 10:24 UTC | 23 Sep 24 10:24 UTC |
	| start   | -o=json --download-only              | download-only-676157   | jenkins | v1.34.0 | 23 Sep 24 10:24 UTC |                     |
	|         | -p download-only-676157              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 10:25 UTC | 23 Sep 24 10:25 UTC |
	| delete  | -p download-only-676157              | download-only-676157   | jenkins | v1.34.0 | 23 Sep 24 10:25 UTC | 23 Sep 24 10:25 UTC |
	| delete  | -p download-only-179078              | download-only-179078   | jenkins | v1.34.0 | 23 Sep 24 10:25 UTC | 23 Sep 24 10:25 UTC |
	| delete  | -p download-only-676157              | download-only-676157   | jenkins | v1.34.0 | 23 Sep 24 10:25 UTC | 23 Sep 24 10:25 UTC |
	| start   | --download-only -p                   | download-docker-775785 | jenkins | v1.34.0 | 23 Sep 24 10:25 UTC |                     |
	|         | download-docker-775785               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-775785            | download-docker-775785 | jenkins | v1.34.0 | 23 Sep 24 10:25 UTC | 23 Sep 24 10:25 UTC |
	| start   | --download-only -p                   | binary-mirror-598077   | jenkins | v1.34.0 | 23 Sep 24 10:25 UTC |                     |
	|         | binary-mirror-598077                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45953               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-598077              | binary-mirror-598077   | jenkins | v1.34.0 | 23 Sep 24 10:25 UTC | 23 Sep 24 10:25 UTC |
	| addons  | enable dashboard -p                  | addons-895903          | jenkins | v1.34.0 | 23 Sep 24 10:25 UTC |                     |
	|         | addons-895903                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-895903          | jenkins | v1.34.0 | 23 Sep 24 10:25 UTC |                     |
	|         | addons-895903                        |                        |         |         |                     |                     |
	| start   | -p addons-895903 --wait=true         | addons-895903          | jenkins | v1.34.0 | 23 Sep 24 10:25 UTC | 23 Sep 24 10:29 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:25:07
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:25:07.641380 2613811 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:25:07.641528 2613811 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:25:07.641553 2613811 out.go:358] Setting ErrFile to fd 2...
	I0923 10:25:07.641571 2613811 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:25:07.641842 2613811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
	I0923 10:25:07.642348 2613811 out.go:352] Setting JSON to false
	I0923 10:25:07.643335 2613811 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":151655,"bootTime":1726935453,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 10:25:07.643409 2613811 start.go:139] virtualization:  
	I0923 10:25:07.646249 2613811 out.go:177] * [addons-895903] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 10:25:07.648110 2613811 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:25:07.648139 2613811 notify.go:220] Checking for updates...
	I0923 10:25:07.652182 2613811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:25:07.653981 2613811 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-2607666/kubeconfig
	I0923 10:25:07.656435 2613811 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2607666/.minikube
	I0923 10:25:07.658248 2613811 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 10:25:07.659939 2613811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:25:07.661985 2613811 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:25:07.687965 2613811 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:25:07.688102 2613811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:25:07.744937 2613811 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 10:25:07.735240578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:25:07.745052 2613811 docker.go:318] overlay module found
	I0923 10:25:07.747057 2613811 out.go:177] * Using the docker driver based on user configuration
	I0923 10:25:07.748929 2613811 start.go:297] selected driver: docker
	I0923 10:25:07.748946 2613811 start.go:901] validating driver "docker" against <nil>
	I0923 10:25:07.748960 2613811 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:25:07.749600 2613811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:25:07.799161 2613811 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 10:25:07.790422927 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:25:07.799385 2613811 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:25:07.799621 2613811 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:25:07.801741 2613811 out.go:177] * Using Docker driver with root privileges
	I0923 10:25:07.803860 2613811 cni.go:84] Creating CNI manager for ""
	I0923 10:25:07.803913 2613811 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 10:25:07.803928 2613811 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 10:25:07.804009 2613811 start.go:340] cluster config:
	{Name:addons-895903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-895903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:25:07.805931 2613811 out.go:177] * Starting "addons-895903" primary control-plane node in "addons-895903" cluster
	I0923 10:25:07.807666 2613811 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 10:25:07.809562 2613811 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:25:07.811335 2613811 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 10:25:07.811390 2613811 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-2607666/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0923 10:25:07.811402 2613811 cache.go:56] Caching tarball of preloaded images
	I0923 10:25:07.811426 2613811 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:25:07.811514 2613811 preload.go:172] Found /home/jenkins/minikube-integration/19689-2607666/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 10:25:07.811525 2613811 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0923 10:25:07.811871 2613811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/config.json ...
	I0923 10:25:07.811900 2613811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/config.json: {Name:mkbac1b4d1bedb57f9296b37e514f282d3b7bdfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:25:07.826150 2613811 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:25:07.826253 2613811 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:25:07.826272 2613811 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 10:25:07.826276 2613811 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 10:25:07.826283 2613811 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 10:25:07.826289 2613811 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 10:25:24.989285 2613811 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 10:25:24.989325 2613811 cache.go:194] Successfully downloaded all kic artifacts
	I0923 10:25:24.989356 2613811 start.go:360] acquireMachinesLock for addons-895903: {Name:mk4b9ad88333f6a0cec9f2ddaa794eedc9f368e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:25:24.990018 2613811 start.go:364] duration metric: took 641.088µs to acquireMachinesLock for "addons-895903"
	I0923 10:25:24.990072 2613811 start.go:93] Provisioning new machine with config: &{Name:addons-895903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-895903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 10:25:24.990166 2613811 start.go:125] createHost starting for "" (driver="docker")
	I0923 10:25:24.993641 2613811 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 10:25:24.993964 2613811 start.go:159] libmachine.API.Create for "addons-895903" (driver="docker")
	I0923 10:25:24.994002 2613811 client.go:168] LocalClient.Create starting
	I0923 10:25:24.994154 2613811 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19689-2607666/.minikube/certs/ca.pem
	I0923 10:25:25.917807 2613811 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19689-2607666/.minikube/certs/cert.pem
	I0923 10:25:26.212645 2613811 cli_runner.go:164] Run: docker network inspect addons-895903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 10:25:26.229433 2613811 cli_runner.go:211] docker network inspect addons-895903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 10:25:26.229526 2613811 network_create.go:284] running [docker network inspect addons-895903] to gather additional debugging logs...
	I0923 10:25:26.229550 2613811 cli_runner.go:164] Run: docker network inspect addons-895903
	W0923 10:25:26.245112 2613811 cli_runner.go:211] docker network inspect addons-895903 returned with exit code 1
	I0923 10:25:26.245150 2613811 network_create.go:287] error running [docker network inspect addons-895903]: docker network inspect addons-895903: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-895903 not found
	I0923 10:25:26.245165 2613811 network_create.go:289] output of [docker network inspect addons-895903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-895903 not found
	
	** /stderr **
	I0923 10:25:26.245271 2613811 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 10:25:26.260458 2613811 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400063b920}
	I0923 10:25:26.260498 2613811 network_create.go:124] attempt to create docker network addons-895903 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 10:25:26.260560 2613811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-895903 addons-895903
	I0923 10:25:26.332617 2613811 network_create.go:108] docker network addons-895903 192.168.49.0/24 created
	I0923 10:25:26.332649 2613811 kic.go:121] calculated static IP "192.168.49.2" for the "addons-895903" container
	I0923 10:25:26.332724 2613811 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 10:25:26.346286 2613811 cli_runner.go:164] Run: docker volume create addons-895903 --label name.minikube.sigs.k8s.io=addons-895903 --label created_by.minikube.sigs.k8s.io=true
	I0923 10:25:26.363592 2613811 oci.go:103] Successfully created a docker volume addons-895903
	I0923 10:25:26.363695 2613811 cli_runner.go:164] Run: docker run --rm --name addons-895903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-895903 --entrypoint /usr/bin/test -v addons-895903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 10:25:27.530609 2613811 cli_runner.go:217] Completed: docker run --rm --name addons-895903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-895903 --entrypoint /usr/bin/test -v addons-895903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (1.166872958s)
	I0923 10:25:27.530650 2613811 oci.go:107] Successfully prepared a docker volume addons-895903
	I0923 10:25:27.530675 2613811 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 10:25:27.530695 2613811 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 10:25:27.530768 2613811 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19689-2607666/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-895903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 10:25:31.466242 2613811 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19689-2607666/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-895903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.935426243s)
	I0923 10:25:31.466275 2613811 kic.go:203] duration metric: took 3.935576962s to extract preloaded images to volume ...
	W0923 10:25:31.466410 2613811 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 10:25:31.466515 2613811 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 10:25:31.522771 2613811 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-895903 --name addons-895903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-895903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-895903 --network addons-895903 --ip 192.168.49.2 --volume addons-895903:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 10:25:31.838228 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Running}}
	I0923 10:25:31.864048 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:31.890389 2613811 cli_runner.go:164] Run: docker exec addons-895903 stat /var/lib/dpkg/alternatives/iptables
	I0923 10:25:31.951053 2613811 oci.go:144] the created container "addons-895903" has a running status.
	I0923 10:25:31.951083 2613811 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa...
	I0923 10:25:32.347386 2613811 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 10:25:32.371492 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:32.402892 2613811 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 10:25:32.402916 2613811 kic_runner.go:114] Args: [docker exec --privileged addons-895903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 10:25:32.499526 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:32.519717 2613811 machine.go:93] provisionDockerMachine start ...
	I0923 10:25:32.519822 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:32.539739 2613811 main.go:141] libmachine: Using SSH client type: native
	I0923 10:25:32.540003 2613811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41421 <nil> <nil>}
	I0923 10:25:32.540020 2613811 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 10:25:32.719039 2613811 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-895903
	
	I0923 10:25:32.719065 2613811 ubuntu.go:169] provisioning hostname "addons-895903"
	I0923 10:25:32.719144 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:32.740947 2613811 main.go:141] libmachine: Using SSH client type: native
	I0923 10:25:32.741194 2613811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41421 <nil> <nil>}
	I0923 10:25:32.741211 2613811 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-895903 && echo "addons-895903" | sudo tee /etc/hostname
	I0923 10:25:32.897700 2613811 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-895903
	
	I0923 10:25:32.897819 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:32.915447 2613811 main.go:141] libmachine: Using SSH client type: native
	I0923 10:25:32.915694 2613811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41421 <nil> <nil>}
	I0923 10:25:32.915716 2613811 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-895903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-895903/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-895903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:25:33.055651 2613811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:25:33.055680 2613811 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19689-2607666/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-2607666/.minikube}
	I0923 10:25:33.055703 2613811 ubuntu.go:177] setting up certificates
	I0923 10:25:33.055713 2613811 provision.go:84] configureAuth start
	I0923 10:25:33.055782 2613811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-895903
	I0923 10:25:33.074585 2613811 provision.go:143] copyHostCerts
	I0923 10:25:33.074671 2613811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-2607666/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-2607666/.minikube/ca.pem (1078 bytes)
	I0923 10:25:33.074807 2613811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-2607666/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-2607666/.minikube/cert.pem (1123 bytes)
	I0923 10:25:33.074881 2613811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-2607666/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-2607666/.minikube/key.pem (1675 bytes)
	I0923 10:25:33.075000 2613811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-2607666/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-2607666/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-2607666/.minikube/certs/ca-key.pem org=jenkins.addons-895903 san=[127.0.0.1 192.168.49.2 addons-895903 localhost minikube]
	I0923 10:25:33.418516 2613811 provision.go:177] copyRemoteCerts
	I0923 10:25:33.418594 2613811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:25:33.418647 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:33.438490 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:33.532063 2613811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2607666/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:25:33.556795 2613811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2607666/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 10:25:33.581785 2613811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2607666/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 10:25:33.606426 2613811 provision.go:87] duration metric: took 550.698501ms to configureAuth
	I0923 10:25:33.606462 2613811 ubuntu.go:193] setting minikube options for container-runtime
	I0923 10:25:33.606659 2613811 config.go:182] Loaded profile config "addons-895903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 10:25:33.606672 2613811 machine.go:96] duration metric: took 1.086930996s to provisionDockerMachine
	I0923 10:25:33.606684 2613811 client.go:171] duration metric: took 8.612665703s to LocalClient.Create
	I0923 10:25:33.606707 2613811 start.go:167] duration metric: took 8.612745185s to libmachine.API.Create "addons-895903"
	I0923 10:25:33.606720 2613811 start.go:293] postStartSetup for "addons-895903" (driver="docker")
	I0923 10:25:33.606730 2613811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:25:33.606795 2613811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:25:33.606841 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:33.623504 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:33.720808 2613811 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:25:33.724086 2613811 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 10:25:33.724121 2613811 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 10:25:33.724132 2613811 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 10:25:33.724139 2613811 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 10:25:33.724149 2613811 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-2607666/.minikube/addons for local assets ...
	I0923 10:25:33.724218 2613811 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-2607666/.minikube/files for local assets ...
	I0923 10:25:33.724242 2613811 start.go:296] duration metric: took 117.515561ms for postStartSetup
	I0923 10:25:33.724562 2613811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-895903
	I0923 10:25:33.740614 2613811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/config.json ...
	I0923 10:25:33.740909 2613811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:25:33.740960 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:33.758935 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:33.852270 2613811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 10:25:33.856701 2613811 start.go:128] duration metric: took 8.866517068s to createHost
	I0923 10:25:33.856727 2613811 start.go:83] releasing machines lock for "addons-895903", held for 8.866690787s
	I0923 10:25:33.856796 2613811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-895903
	I0923 10:25:33.872710 2613811 ssh_runner.go:195] Run: cat /version.json
	I0923 10:25:33.872764 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:33.872769 2613811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:25:33.872859 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:33.892568 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:33.893093 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:34.110401 2613811 ssh_runner.go:195] Run: systemctl --version
	I0923 10:25:34.114812 2613811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 10:25:34.119249 2613811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 10:25:34.144017 2613811 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 10:25:34.144104 2613811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:25:34.173126 2613811 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0923 10:25:34.173197 2613811 start.go:495] detecting cgroup driver to use...
	I0923 10:25:34.173271 2613811 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:25:34.173344 2613811 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 10:25:34.186458 2613811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 10:25:34.198487 2613811 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:25:34.198555 2613811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:25:34.212494 2613811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:25:34.227685 2613811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:25:34.319379 2613811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:25:34.411854 2613811 docker.go:233] disabling docker service ...
	I0923 10:25:34.411945 2613811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:25:34.431496 2613811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:25:34.443484 2613811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:25:34.536190 2613811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:25:34.622813 2613811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:25:34.634104 2613811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:25:34.650772 2613811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 10:25:34.660746 2613811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 10:25:34.670287 2613811 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 10:25:34.670382 2613811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 10:25:34.680259 2613811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:25:34.691646 2613811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 10:25:34.701535 2613811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:25:34.711514 2613811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:25:34.721080 2613811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 10:25:34.731003 2613811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 10:25:34.741157 2613811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 10:25:34.751427 2613811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:25:34.760297 2613811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:25:34.773031 2613811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:25:34.852738 2613811 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 10:25:34.988702 2613811 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0923 10:25:34.988817 2613811 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0923 10:25:34.992837 2613811 start.go:563] Will wait 60s for crictl version
	I0923 10:25:34.992999 2613811 ssh_runner.go:195] Run: which crictl
	I0923 10:25:34.996983 2613811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:25:35.038852 2613811 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0923 10:25:35.039014 2613811 ssh_runner.go:195] Run: containerd --version
	I0923 10:25:35.061854 2613811 ssh_runner.go:195] Run: containerd --version
	I0923 10:25:35.089980 2613811 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0923 10:25:35.092270 2613811 cli_runner.go:164] Run: docker network inspect addons-895903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 10:25:35.108013 2613811 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 10:25:35.112027 2613811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:25:35.124280 2613811 kubeadm.go:883] updating cluster {Name:addons-895903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-895903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:25:35.124412 2613811 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 10:25:35.124475 2613811 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:25:35.162474 2613811 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 10:25:35.162501 2613811 containerd.go:534] Images already preloaded, skipping extraction
	I0923 10:25:35.162568 2613811 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:25:35.198861 2613811 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 10:25:35.198885 2613811 cache_images.go:84] Images are preloaded, skipping loading
	I0923 10:25:35.198894 2613811 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0923 10:25:35.199048 2613811 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-895903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-895903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:25:35.199136 2613811 ssh_runner.go:195] Run: sudo crictl info
	I0923 10:25:35.236480 2613811 cni.go:84] Creating CNI manager for ""
	I0923 10:25:35.236506 2613811 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 10:25:35.236517 2613811 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:25:35.236541 2613811 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-895903 NodeName:addons-895903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:25:35.236677 2613811 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-895903"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:25:35.236753 2613811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:25:35.245618 2613811 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 10:25:35.245695 2613811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:25:35.254499 2613811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0923 10:25:35.273431 2613811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:25:35.292891 2613811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0923 10:25:35.311057 2613811 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 10:25:35.314627 2613811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:25:35.325509 2613811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:25:35.404412 2613811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:25:35.423842 2613811 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903 for IP: 192.168.49.2
	I0923 10:25:35.423865 2613811 certs.go:194] generating shared ca certs ...
	I0923 10:25:35.423882 2613811 certs.go:226] acquiring lock for ca certs: {Name:mkc2661b18cd20ef76bea48b3ea09fb1e1611036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:25:35.424070 2613811 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-2607666/.minikube/ca.key
	I0923 10:25:35.664657 2613811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2607666/.minikube/ca.crt ...
	I0923 10:25:35.664690 2613811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2607666/.minikube/ca.crt: {Name:mk2acbf498be7eb9fd1e609e34971b58d80166f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:25:35.664885 2613811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2607666/.minikube/ca.key ...
	I0923 10:25:35.664898 2613811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2607666/.minikube/ca.key: {Name:mkc1870c5bffa1df2e3213758813cdea7eb8bc40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:25:35.665564 2613811 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-2607666/.minikube/proxy-client-ca.key
	I0923 10:25:35.995171 2613811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2607666/.minikube/proxy-client-ca.crt ...
	I0923 10:25:35.995204 2613811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2607666/.minikube/proxy-client-ca.crt: {Name:mk191d04dcb1c323d55eabd7e3e9677c7c6b6bdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:25:35.995384 2613811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2607666/.minikube/proxy-client-ca.key ...
	I0923 10:25:35.995400 2613811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2607666/.minikube/proxy-client-ca.key: {Name:mk392e746be6cf3ba3a56f779014273f417f1804 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:25:35.995488 2613811 certs.go:256] generating profile certs ...
	I0923 10:25:35.995548 2613811 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.key
	I0923 10:25:35.995574 2613811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt with IP's: []
	I0923 10:25:36.550698 2613811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt ...
	I0923 10:25:36.550730 2613811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: {Name:mk52c061011b0f3aa24257ac1188a8de3a8bf524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:25:36.551473 2613811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.key ...
	I0923 10:25:36.551489 2613811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.key: {Name:mk3799d794cd5493d7e5501e630f6433f2791b62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:25:36.552113 2613811 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/apiserver.key.bd897abb
	I0923 10:25:36.552136 2613811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/apiserver.crt.bd897abb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 10:25:36.802373 2613811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/apiserver.crt.bd897abb ...
	I0923 10:25:36.802406 2613811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/apiserver.crt.bd897abb: {Name:mk7734653c52dbd2068e79485fc03fd9216a459b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:25:36.803090 2613811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/apiserver.key.bd897abb ...
	I0923 10:25:36.803108 2613811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/apiserver.key.bd897abb: {Name:mk34d87f9db1092f46c9fc08e3987627392ea56e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:25:36.803719 2613811 certs.go:381] copying /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/apiserver.crt.bd897abb -> /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/apiserver.crt
	I0923 10:25:36.803805 2613811 certs.go:385] copying /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/apiserver.key.bd897abb -> /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/apiserver.key
	I0923 10:25:36.803864 2613811 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/proxy-client.key
	I0923 10:25:36.803886 2613811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/proxy-client.crt with IP's: []
	I0923 10:25:37.283113 2613811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/proxy-client.crt ...
	I0923 10:25:37.283144 2613811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/proxy-client.crt: {Name:mk013b16236a46e80a5f1c0420e9b88ba7e38434 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:25:37.283869 2613811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/proxy-client.key ...
	I0923 10:25:37.283897 2613811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/proxy-client.key: {Name:mke6d167f82f0a627b86b3ecea1890ba1a4034b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:25:37.284096 2613811 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-2607666/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 10:25:37.284137 2613811 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-2607666/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:25:37.284167 2613811 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-2607666/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:25:37.284200 2613811 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-2607666/.minikube/certs/key.pem (1675 bytes)
	I0923 10:25:37.284799 2613811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2607666/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:25:37.309859 2613811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2607666/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 10:25:37.335235 2613811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2607666/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:25:37.359458 2613811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2607666/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:25:37.383419 2613811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 10:25:37.407692 2613811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 10:25:37.432347 2613811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:25:37.456880 2613811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 10:25:37.481472 2613811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2607666/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:25:37.505748 2613811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:25:37.523216 2613811 ssh_runner.go:195] Run: openssl version
	I0923 10:25:37.528881 2613811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:25:37.538595 2613811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:25:37.542091 2613811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:25:37.542182 2613811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:25:37.549210 2613811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:25:37.558564 2613811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:25:37.561752 2613811 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:25:37.561801 2613811 kubeadm.go:392] StartCluster: {Name:addons-895903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-895903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:25:37.561878 2613811 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0923 10:25:37.561937 2613811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 10:25:37.603113 2613811 cri.go:89] found id: ""
	I0923 10:25:37.603198 2613811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:25:37.615140 2613811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:25:37.624648 2613811 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 10:25:37.624716 2613811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:25:37.635647 2613811 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:25:37.635669 2613811 kubeadm.go:157] found existing configuration files:
	
	I0923 10:25:37.635725 2613811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:25:37.645813 2613811 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:25:37.645880 2613811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:25:37.655461 2613811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:25:37.665037 2613811 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:25:37.665104 2613811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:25:37.675230 2613811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:25:37.684396 2613811 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:25:37.684487 2613811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:25:37.695963 2613811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:25:37.705038 2613811 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:25:37.705107 2613811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:25:37.713929 2613811 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 10:25:37.751931 2613811 kubeadm.go:310] W0923 10:25:37.751208    1024 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:25:37.753123 2613811 kubeadm.go:310] W0923 10:25:37.752622    1024 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:25:37.775254 2613811 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0923 10:25:37.841707 2613811 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:25:52.504769 2613811 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:25:52.504825 2613811 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:25:52.504912 2613811 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 10:25:52.504967 2613811 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0923 10:25:52.505005 2613811 kubeadm.go:310] OS: Linux
	I0923 10:25:52.505050 2613811 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 10:25:52.505106 2613811 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 10:25:52.505153 2613811 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 10:25:52.505201 2613811 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 10:25:52.505249 2613811 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 10:25:52.505298 2613811 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 10:25:52.505343 2613811 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 10:25:52.505391 2613811 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 10:25:52.505437 2613811 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 10:25:52.505508 2613811 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:25:52.505600 2613811 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:25:52.505689 2613811 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:25:52.505750 2613811 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:25:52.508224 2613811 out.go:235]   - Generating certificates and keys ...
	I0923 10:25:52.508332 2613811 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:25:52.508404 2613811 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:25:52.508477 2613811 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:25:52.508550 2613811 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:25:52.508614 2613811 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:25:52.508669 2613811 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:25:52.508728 2613811 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:25:52.508857 2613811 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-895903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 10:25:52.508925 2613811 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:25:52.509053 2613811 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-895903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 10:25:52.509122 2613811 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:25:52.509195 2613811 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:25:52.509244 2613811 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:25:52.509304 2613811 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:25:52.509359 2613811 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:25:52.509430 2613811 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:25:52.509491 2613811 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:25:52.509558 2613811 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:25:52.509620 2613811 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:25:52.509707 2613811 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:25:52.509777 2613811 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:25:52.512097 2613811 out.go:235]   - Booting up control plane ...
	I0923 10:25:52.512246 2613811 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:25:52.512344 2613811 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:25:52.512420 2613811 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:25:52.512536 2613811 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:25:52.512631 2613811 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:25:52.512678 2613811 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:25:52.512826 2613811 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:25:52.512942 2613811 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:25:52.513011 2613811 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.508131221s
	I0923 10:25:52.513090 2613811 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:25:52.513155 2613811 kubeadm.go:310] [api-check] The API server is healthy after 6.002077012s
	I0923 10:25:52.513303 2613811 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:25:52.513449 2613811 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:25:52.513520 2613811 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:25:52.513716 2613811 kubeadm.go:310] [mark-control-plane] Marking the node addons-895903 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:25:52.513780 2613811 kubeadm.go:310] [bootstrap-token] Using token: 4zg0rz.2ebwvuelhji42w71
	I0923 10:25:52.516244 2613811 out.go:235]   - Configuring RBAC rules ...
	I0923 10:25:52.516430 2613811 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:25:52.516553 2613811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:25:52.516744 2613811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:25:52.516931 2613811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:25:52.517083 2613811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:25:52.517205 2613811 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:25:52.517365 2613811 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:25:52.517436 2613811 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:25:52.517514 2613811 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:25:52.517526 2613811 kubeadm.go:310] 
	I0923 10:25:52.517631 2613811 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:25:52.517642 2613811 kubeadm.go:310] 
	I0923 10:25:52.517760 2613811 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:25:52.517770 2613811 kubeadm.go:310] 
	I0923 10:25:52.517810 2613811 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:25:52.517891 2613811 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:25:52.517956 2613811 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:25:52.517969 2613811 kubeadm.go:310] 
	I0923 10:25:52.518042 2613811 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:25:52.518056 2613811 kubeadm.go:310] 
	I0923 10:25:52.518128 2613811 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:25:52.518137 2613811 kubeadm.go:310] 
	I0923 10:25:52.518215 2613811 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:25:52.518324 2613811 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:25:52.518424 2613811 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:25:52.518432 2613811 kubeadm.go:310] 
	I0923 10:25:52.518576 2613811 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:25:52.518683 2613811 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:25:52.518693 2613811 kubeadm.go:310] 
	I0923 10:25:52.518811 2613811 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4zg0rz.2ebwvuelhji42w71 \
	I0923 10:25:52.518972 2613811 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:deeeddfc0a4999368721e63570c2dde69fb165cf439b6cc1b8195d35c4b9585a \
	I0923 10:25:52.519001 2613811 kubeadm.go:310] 	--control-plane 
	I0923 10:25:52.519009 2613811 kubeadm.go:310] 
	I0923 10:25:52.519125 2613811 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:25:52.519136 2613811 kubeadm.go:310] 
	I0923 10:25:52.519254 2613811 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4zg0rz.2ebwvuelhji42w71 \
	I0923 10:25:52.519647 2613811 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:deeeddfc0a4999368721e63570c2dde69fb165cf439b6cc1b8195d35c4b9585a 
	I0923 10:25:52.519672 2613811 cni.go:84] Creating CNI manager for ""
	I0923 10:25:52.519682 2613811 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 10:25:52.522161 2613811 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 10:25:52.524182 2613811 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 10:25:52.528636 2613811 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 10:25:52.528665 2613811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 10:25:52.552625 2613811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 10:25:52.841728 2613811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:25:52.841860 2613811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:25:52.841946 2613811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-895903 minikube.k8s.io/updated_at=2024_09_23T10_25_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=addons-895903 minikube.k8s.io/primary=true
	I0923 10:25:53.055771 2613811 ops.go:34] apiserver oom_adj: -16
	I0923 10:25:53.055920 2613811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:25:53.556732 2613811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:25:54.055988 2613811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:25:54.556662 2613811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:25:55.057047 2613811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:25:55.556051 2613811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:25:56.056463 2613811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:25:56.160907 2613811 kubeadm.go:1113] duration metric: took 3.319094577s to wait for elevateKubeSystemPrivileges
	I0923 10:25:56.160953 2613811 kubeadm.go:394] duration metric: took 18.599155453s to StartCluster
	I0923 10:25:56.160973 2613811 settings.go:142] acquiring lock: {Name:mk1c6f0c92ddda690ace497add83b1e7b2d81202 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:25:56.161105 2613811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19689-2607666/kubeconfig
	I0923 10:25:56.161506 2613811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2607666/kubeconfig: {Name:mkc42fb93f23df2d68ce1d16de1a9365e31d0501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:25:56.161719 2613811 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 10:25:56.161862 2613811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:25:56.162133 2613811 config.go:182] Loaded profile config "addons-895903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 10:25:56.162181 2613811 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 10:25:56.162260 2613811 addons.go:69] Setting yakd=true in profile "addons-895903"
	I0923 10:25:56.162277 2613811 addons.go:234] Setting addon yakd=true in "addons-895903"
	I0923 10:25:56.162301 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.162809 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.163363 2613811 addons.go:69] Setting inspektor-gadget=true in profile "addons-895903"
	I0923 10:25:56.163389 2613811 addons.go:234] Setting addon inspektor-gadget=true in "addons-895903"
	I0923 10:25:56.163425 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.163855 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.164325 2613811 addons.go:69] Setting cloud-spanner=true in profile "addons-895903"
	I0923 10:25:56.164349 2613811 addons.go:234] Setting addon cloud-spanner=true in "addons-895903"
	I0923 10:25:56.164379 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.164794 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.166348 2613811 addons.go:69] Setting metrics-server=true in profile "addons-895903"
	I0923 10:25:56.166612 2613811 addons.go:234] Setting addon metrics-server=true in "addons-895903"
	I0923 10:25:56.166666 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.167043 2613811 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-895903"
	I0923 10:25:56.167097 2613811 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-895903"
	I0923 10:25:56.167125 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.169094 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.166516 2613811 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-895903"
	I0923 10:25:56.169852 2613811 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-895903"
	I0923 10:25:56.169919 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.166529 2613811 addons.go:69] Setting registry=true in profile "addons-895903"
	I0923 10:25:56.170546 2613811 addons.go:234] Setting addon registry=true in "addons-895903"
	I0923 10:25:56.170599 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.171127 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.179437 2613811 addons.go:69] Setting default-storageclass=true in profile "addons-895903"
	I0923 10:25:56.179518 2613811 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-895903"
	I0923 10:25:56.179867 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.166536 2613811 addons.go:69] Setting storage-provisioner=true in profile "addons-895903"
	I0923 10:25:56.180182 2613811 addons.go:234] Setting addon storage-provisioner=true in "addons-895903"
	I0923 10:25:56.180220 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.180628 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.166542 2613811 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-895903"
	I0923 10:25:56.186186 2613811 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-895903"
	I0923 10:25:56.186516 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.196030 2613811 addons.go:69] Setting gcp-auth=true in profile "addons-895903"
	I0923 10:25:56.196117 2613811 mustload.go:65] Loading cluster: addons-895903
	I0923 10:25:56.196363 2613811 config.go:182] Loaded profile config "addons-895903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 10:25:56.196687 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.166548 2613811 addons.go:69] Setting volcano=true in profile "addons-895903"
	I0923 10:25:56.199478 2613811 addons.go:234] Setting addon volcano=true in "addons-895903"
	I0923 10:25:56.199526 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.166556 2613811 addons.go:69] Setting volumesnapshots=true in profile "addons-895903"
	I0923 10:25:56.200399 2613811 out.go:177] * Verifying Kubernetes components...
	I0923 10:25:56.200651 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.213221 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.217897 2613811 addons.go:69] Setting ingress=true in profile "addons-895903"
	I0923 10:25:56.217971 2613811 addons.go:234] Setting addon ingress=true in "addons-895903"
	I0923 10:25:56.219832 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.220389 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.228740 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.230707 2613811 addons.go:69] Setting ingress-dns=true in profile "addons-895903"
	I0923 10:25:56.230764 2613811 addons.go:234] Setting addon ingress-dns=true in "addons-895903"
	I0923 10:25:56.230832 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.231505 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.233035 2613811 addons.go:234] Setting addon volumesnapshots=true in "addons-895903"
	I0923 10:25:56.233101 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.235148 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.266669 2613811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:25:56.280702 2613811 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:25:56.282546 2613811 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:25:56.282575 2613811 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:25:56.282647 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:56.322003 2613811 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:25:56.326863 2613811 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:25:56.330229 2613811 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:25:56.330256 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:25:56.330322 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:56.341798 2613811 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:25:56.344063 2613811 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:25:56.348743 2613811 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:25:56.348768 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:25:56.348842 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:56.362847 2613811 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:25:56.367165 2613811 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:25:56.367196 2613811 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:25:56.367324 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:56.412993 2613811 addons.go:234] Setting addon default-storageclass=true in "addons-895903"
	I0923 10:25:56.413037 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.413464 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.418198 2613811 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-895903"
	I0923 10:25:56.418235 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.418672 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:25:56.431034 2613811 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:25:56.435438 2613811 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:25:56.436680 2613811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:25:56.442345 2613811 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:25:56.442481 2613811 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:25:56.445419 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:25:56.454866 2613811 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:25:56.454998 2613811 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:25:56.455016 2613811 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:25:56.455103 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:56.465664 2613811 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:25:56.465817 2613811 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:25:56.471364 2613811 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:25:56.471682 2613811 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:25:56.471711 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:25:56.471797 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:56.475496 2613811 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:25:56.475520 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:25:56.475587 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:56.499362 2613811 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:25:56.501602 2613811 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:25:56.501629 2613811 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:25:56.501700 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:56.506492 2613811 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:25:56.514962 2613811 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 10:25:56.519764 2613811 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:25:56.520020 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:56.522163 2613811 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 10:25:56.522460 2613811 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:25:56.522475 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 10:25:56.522538 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:56.529297 2613811 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 10:25:56.529735 2613811 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:25:56.529751 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 10:25:56.529813 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:56.545935 2613811 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:25:56.551388 2613811 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 10:25:56.556230 2613811 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 10:25:56.559248 2613811 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:25:56.559274 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 10:25:56.562409 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:56.573551 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:56.573910 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:56.576823 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:56.578177 2613811 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:25:56.583026 2613811 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:25:56.583059 2613811 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:25:56.583122 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:56.583543 2613811 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:25:56.589557 2613811 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:25:56.591512 2613811 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:25:56.591534 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:25:56.591604 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:56.646924 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:56.677886 2613811 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:25:56.677908 2613811 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:25:56.678112 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:25:56.678515 2613811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:25:56.695447 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:56.702469 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:56.705169 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:56.717200 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:56.724924 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:56.761050 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:56.766594 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	W0923 10:25:56.768690 2613811 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 10:25:56.768716 2613811 retry.go:31] will retry after 339.286565ms: ssh: handshake failed: EOF
	I0923 10:25:56.772325 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:56.793444 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:25:57.206129 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:25:57.231628 2613811 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:25:57.231700 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:25:57.235340 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:25:57.282384 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:25:57.385416 2613811 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:25:57.385496 2613811 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:25:57.400083 2613811 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:25:57.400147 2613811 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:25:57.451487 2613811 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:25:57.451510 2613811 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:25:57.602008 2613811 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:25:57.602088 2613811 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:25:57.741771 2613811 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:25:57.741850 2613811 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:25:57.758051 2613811 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:25:57.758081 2613811 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:25:57.806090 2613811 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:25:57.806115 2613811 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:25:57.856728 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:25:57.873402 2613811 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:25:57.873428 2613811 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:25:57.875781 2613811 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:25:57.875803 2613811 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:25:57.912042 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:25:57.936303 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:25:57.960885 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:25:57.995320 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:25:57.999779 2613811 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:25:57.999804 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:25:58.092014 2613811 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:25:58.092039 2613811 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:25:58.104282 2613811 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:25:58.104308 2613811 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:25:58.118896 2613811 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:25:58.118924 2613811 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:25:58.180700 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:25:58.298352 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:25:58.301350 2613811 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:25:58.301381 2613811 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:25:58.482488 2613811 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:25:58.482514 2613811 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:25:58.491676 2613811 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:25:58.491703 2613811 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:25:58.587946 2613811 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:25:58.587973 2613811 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:25:58.632344 2613811 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.195626135s)
	I0923 10:25:58.632374 2613811 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0923 10:25:58.633395 2613811 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.954856606s)
	I0923 10:25:58.634085 2613811 node_ready.go:35] waiting up to 6m0s for node "addons-895903" to be "Ready" ...
	I0923 10:25:58.637637 2613811 node_ready.go:49] node "addons-895903" has status "Ready":"True"
	I0923 10:25:58.637698 2613811 node_ready.go:38] duration metric: took 3.584182ms for node "addons-895903" to be "Ready" ...
	I0923 10:25:58.637724 2613811 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:25:58.653283 2613811 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace to be "Ready" ...
	I0923 10:25:58.759300 2613811 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:25:58.759377 2613811 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:25:58.770491 2613811 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:25:58.770562 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:25:58.773719 2613811 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:25:58.773789 2613811 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:25:58.955727 2613811 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:25:58.955801 2613811 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:25:59.007992 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:25:59.018121 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.811917434s)
	I0923 10:25:59.047422 2613811 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:25:59.047499 2613811 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:25:59.089439 2613811 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:25:59.089512 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:25:59.137033 2613811 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-895903" context rescaled to 1 replicas
	I0923 10:25:59.211435 2613811 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:25:59.211509 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:25:59.364792 2613811 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:25:59.364872 2613811 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:25:59.417450 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:25:59.515369 2613811 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:25:59.515451 2613811 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:25:59.538207 2613811 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:25:59.538279 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:25:59.786760 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:25:59.906870 2613811 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:25:59.906945 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:26:00.569623 2613811 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:26:00.569698 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:26:00.661782 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:00.795905 2613811 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:26:00.795982 2613811 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:26:00.995911 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:26:03.191608 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:03.655428 2613811 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:26:03.655625 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:26:03.703413 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:26:03.963884 2613811 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:26:04.056152 2613811 addons.go:234] Setting addon gcp-auth=true in "addons-895903"
	I0923 10:26:04.056212 2613811 host.go:66] Checking if "addons-895903" exists ...
	I0923 10:26:04.056728 2613811 cli_runner.go:164] Run: docker container inspect addons-895903 --format={{.State.Status}}
	I0923 10:26:04.091467 2613811 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:26:04.091526 2613811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-895903
	I0923 10:26:04.129679 2613811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41421 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/addons-895903/id_rsa Username:docker}
	I0923 10:26:04.618667 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.336244667s)
	I0923 10:26:04.618813 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.761994872s)
	I0923 10:26:04.618899 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.706835365s)
	I0923 10:26:04.619150 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.682826337s)
	I0923 10:26:04.619226 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.658317732s)
	I0923 10:26:04.619792 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.384374018s)
	I0923 10:26:04.619839 2613811 addons.go:475] Verifying addon ingress=true in "addons-895903"
	I0923 10:26:04.621975 2613811 out.go:177] * Verifying ingress addon...
	I0923 10:26:04.625048 2613811 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 10:26:04.634289 2613811 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 10:26:04.634310 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0923 10:26:04.638210 2613811 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0923 10:26:05.178969 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:05.235510 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:05.718296 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:06.129991 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:06.244674 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.063937477s)
	I0923 10:26:06.245015 2613811 addons.go:475] Verifying addon registry=true in "addons-895903"
	I0923 10:26:06.244800 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.946416812s)
	I0923 10:26:06.245073 2613811 addons.go:475] Verifying addon metrics-server=true in "addons-895903"
	I0923 10:26:06.244874 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.236802014s)
	W0923 10:26:06.245097 2613811 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:26:06.245112 2613811 retry.go:31] will retry after 245.145851ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:26:06.244905 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.827377937s)
	I0923 10:26:06.244966 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.45813048s)
	I0923 10:26:06.245498 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.250151443s)
	I0923 10:26:06.247359 2613811 out.go:177] * Verifying registry addon...
	I0923 10:26:06.247368 2613811 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-895903 service yakd-dashboard -n yakd-dashboard
	
	I0923 10:26:06.250188 2613811 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 10:26:06.282464 2613811 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:26:06.282487 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:06.491024 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:26:06.660915 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:06.753880 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:07.157215 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:07.251659 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.255702983s)
	I0923 10:26:07.251687 2613811 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-895903"
	I0923 10:26:07.251820 2613811 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.16032709s)
	I0923 10:26:07.254573 2613811 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:26:07.254673 2613811 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:26:07.256964 2613811 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:26:07.257895 2613811 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:26:07.260326 2613811 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:26:07.260380 2613811 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:26:07.263591 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:07.272175 2613811 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:26:07.272319 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:07.313625 2613811 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:26:07.313691 2613811 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:26:07.390497 2613811 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:26:07.390581 2613811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:26:07.435515 2613811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:26:07.629422 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:07.658905 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:07.754581 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:07.762877 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:07.950427 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.459358869s)
	I0923 10:26:08.129802 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:08.254410 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:08.262930 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:08.460410 2613811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.024850845s)
	I0923 10:26:08.464814 2613811 addons.go:475] Verifying addon gcp-auth=true in "addons-895903"
	I0923 10:26:08.467208 2613811 out.go:177] * Verifying gcp-auth addon...
	I0923 10:26:08.470163 2613811 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:26:08.480806 2613811 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:26:08.630753 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:08.753503 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:08.763260 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:09.130229 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:09.254386 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:09.263544 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:09.630352 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:09.660039 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:09.754429 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:09.763139 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:10.132090 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:10.255380 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:10.263224 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:10.630418 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:10.754715 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:10.764156 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:11.131115 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:11.255079 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:11.265861 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:11.631624 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:11.664074 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:11.757498 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:11.765117 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:12.131567 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:12.254127 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:12.262829 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:12.628877 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:12.754627 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:12.763439 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:13.129564 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:13.254382 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:13.263147 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:13.629834 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:13.667363 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:13.754870 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:13.763479 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:14.129256 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:14.254275 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:14.262721 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:14.629292 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:14.754667 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:14.762190 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:15.130377 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:15.254237 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:15.263552 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:15.629651 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:15.754828 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:15.764127 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:16.141233 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:16.160922 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:16.256752 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:16.263701 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:16.629735 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:16.754944 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:16.762687 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:17.129813 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:17.253853 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:17.262512 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:17.629275 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:17.754651 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:17.763047 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:18.129467 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:18.254456 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:18.262608 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:18.629213 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:18.660533 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:18.754174 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:18.762967 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:19.131471 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:19.254540 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:19.263218 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:19.630409 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:19.753618 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:19.762459 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:20.130037 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:20.254779 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:20.262694 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:20.629127 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:20.753747 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:20.762456 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:21.129187 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:21.160542 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:21.254345 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:21.262737 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:21.631138 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:21.754530 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:21.762591 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:22.129924 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:22.254596 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:22.262807 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:22.629980 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:22.757786 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:22.762801 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:23.129220 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:23.254521 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:23.262350 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:23.630381 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:23.660797 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:23.753829 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:23.762765 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:24.130332 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:24.254688 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:24.263180 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:24.629585 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:24.754393 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:24.762882 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:25.130579 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:25.254219 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:25.263171 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:25.629692 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:25.754874 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:25.762572 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:26.130090 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:26.160302 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:26.260154 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:26.265330 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:26.631033 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:26.754386 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:26.763083 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:27.129110 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:27.254123 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:27.262639 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:27.629670 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:27.754478 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:27.764052 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:28.129915 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:28.254253 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:28.262888 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:28.629344 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:28.659357 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:28.754284 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:28.762718 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:29.130177 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:29.254628 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:29.263460 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:29.629621 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:29.753983 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:29.762718 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:30.131418 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:30.254578 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:30.263661 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:30.629213 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:30.660413 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:30.754541 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:30.762878 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:31.129184 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:31.254795 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:31.263188 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:31.629275 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:31.754229 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:31.763522 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:32.130876 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:32.255212 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:32.262837 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:32.629501 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:32.754612 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:32.762324 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:33.130545 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:33.163364 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:33.266064 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:33.280739 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:33.629572 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:33.754610 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:33.762913 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:34.129568 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:34.253831 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:34.263224 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:34.629901 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:34.754890 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:34.762329 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:35.129904 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:35.254065 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:35.263137 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:35.629089 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:35.660275 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:35.754030 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:35.762581 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:36.130167 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:36.254092 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:36.262710 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:36.630213 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:36.754297 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:36.762991 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:37.130391 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:37.253951 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:37.262193 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:37.630168 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:37.754218 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:37.762682 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:38.129691 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:38.160687 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:38.254339 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:38.263058 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:38.630141 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:38.754156 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:38.762917 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:39.129806 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:39.254340 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:39.262797 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:39.630213 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:39.754538 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:39.762936 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:40.130585 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:40.160767 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:40.254787 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:40.263394 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:40.629901 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:40.753837 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:40.762846 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:41.132544 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:41.255797 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:41.265682 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:41.630251 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:41.754780 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:41.763363 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:42.132276 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:42.171180 2613811 pod_ready.go:103] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"False"
	I0923 10:26:42.261631 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:42.272119 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:42.630092 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:42.659049 2613811 pod_ready.go:93] pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace has status "Ready":"True"
	I0923 10:26:42.659076 2613811 pod_ready.go:82] duration metric: took 44.005713606s for pod "coredns-7c65d6cfc9-j4q7p" in "kube-system" namespace to be "Ready" ...
	I0923 10:26:42.659088 2613811 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-md4fh" in "kube-system" namespace to be "Ready" ...
	I0923 10:26:42.660931 2613811 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-md4fh" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-md4fh" not found
	I0923 10:26:42.660995 2613811 pod_ready.go:82] duration metric: took 1.899553ms for pod "coredns-7c65d6cfc9-md4fh" in "kube-system" namespace to be "Ready" ...
	E0923 10:26:42.661014 2613811 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-md4fh" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-md4fh" not found
	I0923 10:26:42.661021 2613811 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-895903" in "kube-system" namespace to be "Ready" ...
	I0923 10:26:42.666203 2613811 pod_ready.go:93] pod "etcd-addons-895903" in "kube-system" namespace has status "Ready":"True"
	I0923 10:26:42.666226 2613811 pod_ready.go:82] duration metric: took 5.19759ms for pod "etcd-addons-895903" in "kube-system" namespace to be "Ready" ...
	I0923 10:26:42.666242 2613811 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-895903" in "kube-system" namespace to be "Ready" ...
	I0923 10:26:42.671375 2613811 pod_ready.go:93] pod "kube-apiserver-addons-895903" in "kube-system" namespace has status "Ready":"True"
	I0923 10:26:42.671399 2613811 pod_ready.go:82] duration metric: took 5.147227ms for pod "kube-apiserver-addons-895903" in "kube-system" namespace to be "Ready" ...
	I0923 10:26:42.671411 2613811 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-895903" in "kube-system" namespace to be "Ready" ...
	I0923 10:26:42.676355 2613811 pod_ready.go:93] pod "kube-controller-manager-addons-895903" in "kube-system" namespace has status "Ready":"True"
	I0923 10:26:42.676379 2613811 pod_ready.go:82] duration metric: took 4.960692ms for pod "kube-controller-manager-addons-895903" in "kube-system" namespace to be "Ready" ...
	I0923 10:26:42.676392 2613811 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mckj4" in "kube-system" namespace to be "Ready" ...
	I0923 10:26:42.753675 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:42.762426 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:42.856542 2613811 pod_ready.go:93] pod "kube-proxy-mckj4" in "kube-system" namespace has status "Ready":"True"
	I0923 10:26:42.856568 2613811 pod_ready.go:82] duration metric: took 180.16857ms for pod "kube-proxy-mckj4" in "kube-system" namespace to be "Ready" ...
	I0923 10:26:42.856579 2613811 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-895903" in "kube-system" namespace to be "Ready" ...
	I0923 10:26:43.130578 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:43.254455 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:43.257188 2613811 pod_ready.go:93] pod "kube-scheduler-addons-895903" in "kube-system" namespace has status "Ready":"True"
	I0923 10:26:43.257216 2613811 pod_ready.go:82] duration metric: took 400.625966ms for pod "kube-scheduler-addons-895903" in "kube-system" namespace to be "Ready" ...
	I0923 10:26:43.257227 2613811 pod_ready.go:39] duration metric: took 44.619478156s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:26:43.257242 2613811 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:26:43.257308 2613811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:26:43.271122 2613811 api_server.go:72] duration metric: took 47.109366553s to wait for apiserver process to appear ...
	I0923 10:26:43.271146 2613811 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:26:43.271167 2613811 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 10:26:43.280169 2613811 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 10:26:43.281163 2613811 api_server.go:141] control plane version: v1.31.1
	I0923 10:26:43.281190 2613811 api_server.go:131] duration metric: took 10.035944ms to wait for apiserver health ...
	I0923 10:26:43.281199 2613811 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:26:43.356144 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:43.463788 2613811 system_pods.go:59] 18 kube-system pods found
	I0923 10:26:43.463828 2613811 system_pods.go:61] "coredns-7c65d6cfc9-j4q7p" [798cfb79-7676-4cad-b9fb-1af6a1c8291c] Running
	I0923 10:26:43.463837 2613811 system_pods.go:61] "csi-hostpath-attacher-0" [dece228a-f5d6-44a9-86f6-4b0934d95786] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:26:43.463845 2613811 system_pods.go:61] "csi-hostpath-resizer-0" [0b290bf6-b803-4f36-9404-5110fdaed196] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:26:43.463855 2613811 system_pods.go:61] "csi-hostpathplugin-4n4q7" [eca301e7-fca3-486b-8cb3-c08e370c253a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:26:43.463860 2613811 system_pods.go:61] "etcd-addons-895903" [d0fa203b-30c6-467d-966f-e432eade70ab] Running
	I0923 10:26:43.463864 2613811 system_pods.go:61] "kindnet-vj6hj" [247b200b-16cd-4790-afc8-4f1ae7f8a569] Running
	I0923 10:26:43.463868 2613811 system_pods.go:61] "kube-apiserver-addons-895903" [a09526d5-215c-451f-b395-342ea2ada4df] Running
	I0923 10:26:43.463872 2613811 system_pods.go:61] "kube-controller-manager-addons-895903" [f4531424-e4e2-4b14-a36f-140c87b1f59c] Running
	I0923 10:26:43.463877 2613811 system_pods.go:61] "kube-ingress-dns-minikube" [5cd5ffcb-7820-43fa-b964-0b987c301620] Running
	I0923 10:26:43.463881 2613811 system_pods.go:61] "kube-proxy-mckj4" [20b8dd4c-d9fe-4395-acf6-d0db1dfb38e5] Running
	I0923 10:26:43.463885 2613811 system_pods.go:61] "kube-scheduler-addons-895903" [09e493cd-0e0b-441c-94c1-84a4b8584c50] Running
	I0923 10:26:43.463893 2613811 system_pods.go:61] "metrics-server-84c5f94fbc-jw4gj" [ce212bd3-d200-4f28-ad95-8a82fcaa0703] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:26:43.463934 2613811 system_pods.go:61] "nvidia-device-plugin-daemonset-r7wk4" [139101f4-490e-4130-90f0-4341fcfd1afb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 10:26:43.463941 2613811 system_pods.go:61] "registry-66c9cd494c-jwrzn" [b939f687-74b6-4a54-9a56-07aa57ae0752] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:26:43.463951 2613811 system_pods.go:61] "registry-proxy-skfcc" [7874c975-6ab8-4813-bd94-94c3ecf85327] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:26:43.463958 2613811 system_pods.go:61] "snapshot-controller-56fcc65765-57fj2" [9aae62e1-0a42-406c-a542-2162595178b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:26:43.463967 2613811 system_pods.go:61] "snapshot-controller-56fcc65765-jt2fx" [049df3ed-990e-4945-8bf9-9b93dc68dad2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:26:43.463972 2613811 system_pods.go:61] "storage-provisioner" [b358446f-3d09-4444-ac88-c5483b59d295] Running
	I0923 10:26:43.463977 2613811 system_pods.go:74] duration metric: took 182.735664ms to wait for pod list to return data ...
	I0923 10:26:43.463985 2613811 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:26:43.630287 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:43.657370 2613811 default_sa.go:45] found service account: "default"
	I0923 10:26:43.657396 2613811 default_sa.go:55] duration metric: took 193.399905ms for default service account to be created ...
	I0923 10:26:43.657406 2613811 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:26:43.753730 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:43.763061 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:43.868449 2613811 system_pods.go:86] 18 kube-system pods found
	I0923 10:26:43.868538 2613811 system_pods.go:89] "coredns-7c65d6cfc9-j4q7p" [798cfb79-7676-4cad-b9fb-1af6a1c8291c] Running
	I0923 10:26:43.868568 2613811 system_pods.go:89] "csi-hostpath-attacher-0" [dece228a-f5d6-44a9-86f6-4b0934d95786] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:26:43.868604 2613811 system_pods.go:89] "csi-hostpath-resizer-0" [0b290bf6-b803-4f36-9404-5110fdaed196] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:26:43.868633 2613811 system_pods.go:89] "csi-hostpathplugin-4n4q7" [eca301e7-fca3-486b-8cb3-c08e370c253a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:26:43.868654 2613811 system_pods.go:89] "etcd-addons-895903" [d0fa203b-30c6-467d-966f-e432eade70ab] Running
	I0923 10:26:43.868671 2613811 system_pods.go:89] "kindnet-vj6hj" [247b200b-16cd-4790-afc8-4f1ae7f8a569] Running
	I0923 10:26:43.868676 2613811 system_pods.go:89] "kube-apiserver-addons-895903" [a09526d5-215c-451f-b395-342ea2ada4df] Running
	I0923 10:26:43.868681 2613811 system_pods.go:89] "kube-controller-manager-addons-895903" [f4531424-e4e2-4b14-a36f-140c87b1f59c] Running
	I0923 10:26:43.868685 2613811 system_pods.go:89] "kube-ingress-dns-minikube" [5cd5ffcb-7820-43fa-b964-0b987c301620] Running
	I0923 10:26:43.868689 2613811 system_pods.go:89] "kube-proxy-mckj4" [20b8dd4c-d9fe-4395-acf6-d0db1dfb38e5] Running
	I0923 10:26:43.868693 2613811 system_pods.go:89] "kube-scheduler-addons-895903" [09e493cd-0e0b-441c-94c1-84a4b8584c50] Running
	I0923 10:26:43.868713 2613811 system_pods.go:89] "metrics-server-84c5f94fbc-jw4gj" [ce212bd3-d200-4f28-ad95-8a82fcaa0703] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:26:43.868721 2613811 system_pods.go:89] "nvidia-device-plugin-daemonset-r7wk4" [139101f4-490e-4130-90f0-4341fcfd1afb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 10:26:43.868728 2613811 system_pods.go:89] "registry-66c9cd494c-jwrzn" [b939f687-74b6-4a54-9a56-07aa57ae0752] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:26:43.868735 2613811 system_pods.go:89] "registry-proxy-skfcc" [7874c975-6ab8-4813-bd94-94c3ecf85327] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:26:43.868741 2613811 system_pods.go:89] "snapshot-controller-56fcc65765-57fj2" [9aae62e1-0a42-406c-a542-2162595178b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:26:43.868747 2613811 system_pods.go:89] "snapshot-controller-56fcc65765-jt2fx" [049df3ed-990e-4945-8bf9-9b93dc68dad2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:26:43.868751 2613811 system_pods.go:89] "storage-provisioner" [b358446f-3d09-4444-ac88-c5483b59d295] Running
	I0923 10:26:43.868759 2613811 system_pods.go:126] duration metric: took 211.34738ms to wait for k8s-apps to be running ...
	I0923 10:26:43.868767 2613811 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:26:43.868826 2613811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:26:43.886247 2613811 system_svc.go:56] duration metric: took 17.469734ms WaitForService to wait for kubelet
	I0923 10:26:43.886327 2613811 kubeadm.go:582] duration metric: took 47.724576505s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:26:43.886361 2613811 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:26:44.059524 2613811 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 10:26:44.059559 2613811 node_conditions.go:123] node cpu capacity is 2
	I0923 10:26:44.059572 2613811 node_conditions.go:105] duration metric: took 173.174146ms to run NodePressure ...
	I0923 10:26:44.059586 2613811 start.go:241] waiting for startup goroutines ...
	I0923 10:26:44.129792 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:44.254862 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:44.262502 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:44.629808 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:44.754891 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:44.764260 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:45.238315 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:45.259505 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:45.265245 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:45.631176 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:45.754370 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:45.763465 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:46.129150 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:46.254243 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:46.263055 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:46.629662 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:46.754729 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:46.762428 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:47.130723 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:47.254234 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:47.262838 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:47.630557 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:47.754367 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:47.762410 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:48.130421 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:48.256424 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:48.263273 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:48.629656 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:48.754389 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:48.763202 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:49.129597 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:49.258385 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:49.263057 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:49.629405 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:49.754752 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:49.763489 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:50.130365 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:50.253686 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:50.262438 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:50.637444 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:50.754898 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:50.763200 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:51.130619 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:51.254476 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:51.263407 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:51.630500 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:51.754821 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:51.763207 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:52.130148 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:52.253765 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:52.262762 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:52.630536 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:52.754539 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:52.763596 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:53.129257 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:53.255061 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:53.263778 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:53.630040 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:53.753538 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:53.763110 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:54.130705 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:54.255407 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:54.264478 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:54.630595 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:54.754461 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:54.763182 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:55.130164 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:55.253548 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:55.262878 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:55.632385 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:55.754518 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:55.762994 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:56.132422 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:56.254591 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:56.262993 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:56.630110 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:56.754868 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:56.764137 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:57.129038 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:57.253886 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:57.262513 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:57.629553 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:57.754268 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:57.762790 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:58.129213 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:58.253997 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:58.262928 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:58.629390 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:58.754239 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:58.765318 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:59.130031 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:59.254927 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:59.262924 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:59.630275 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:59.754281 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:59.762510 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:00.177392 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:00.279387 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:00.280065 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:27:00.629497 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:00.754383 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:27:00.763186 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:01.130889 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:01.255764 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:27:01.263786 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:01.629500 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:01.754419 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:27:01.763140 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:02.130590 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:02.254458 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:27:02.263675 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:02.629712 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:02.757096 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:27:02.766157 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:03.129745 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:03.254665 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:27:03.263940 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:03.630508 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:03.755569 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:27:03.763376 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:04.129584 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:04.253932 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:27:04.262116 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:04.629789 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:04.755970 2613811 kapi.go:107] duration metric: took 58.505778932s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 10:27:04.762399 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:05.130858 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:05.263960 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:05.632313 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:05.766639 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:06.131465 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:06.271245 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:06.630399 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:06.771244 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:07.131316 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:07.263944 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:07.631016 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:07.767925 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:08.129795 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:08.264098 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:08.629817 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:08.764356 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:09.130366 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:09.263203 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:09.629582 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:09.763171 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:10.130714 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:10.263498 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:10.630305 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:10.762687 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:11.131496 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:11.265340 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:11.630423 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:11.764088 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:12.129824 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:12.264965 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:12.631076 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:12.762930 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:13.130490 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:13.263971 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:13.630166 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:13.763582 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:14.129857 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:14.262960 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:14.630434 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:14.762814 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:15.130277 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:15.262765 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:15.630257 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:15.763026 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:16.130924 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:16.264173 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:16.632198 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:16.763221 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:17.131558 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:17.263571 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:17.631024 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:17.763471 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:18.130284 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:18.263558 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:18.630696 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:18.765310 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:19.131012 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:19.262396 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:19.630126 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:19.766506 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:20.139913 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:20.262763 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:20.629614 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:20.763356 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:21.129500 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:21.263580 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:21.633562 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:21.762711 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:22.129464 2613811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:22.264976 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:22.633733 2613811 kapi.go:107] duration metric: took 1m18.008678344s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 10:27:22.768316 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:23.263872 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:23.763788 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:24.262330 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:24.763509 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:25.264056 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:25.762820 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:26.263562 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:26.762987 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:27.263482 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:27.764338 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:28.263321 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:28.763622 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:29.262898 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:29.763019 2613811 kapi.go:107] duration metric: took 1m22.50512239s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 10:28:52.474069 2613811 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:28:52.474092 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:52.974235 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:53.474876 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:53.973907 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:54.473873 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:54.973502 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:55.475037 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:55.973452 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:56.474267 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:56.974006 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:57.473831 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:57.973855 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:58.473523 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:58.975054 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:59.473744 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:28:59.973434 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:00.475024 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:00.974427 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:01.475245 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:01.974397 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:02.474005 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:02.973329 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:03.474310 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:03.974463 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:04.474068 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:04.973296 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:05.474462 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:05.974345 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:06.473533 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:06.974277 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:07.474223 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:07.974520 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:08.475134 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:08.974605 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:09.474519 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:09.974367 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:10.474567 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:10.974587 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:11.476547 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:11.976413 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:12.474375 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:12.974033 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:13.473784 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:13.974352 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:14.474122 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:14.973488 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:15.474166 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:15.974712 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:16.473896 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:16.974171 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:17.474441 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:17.974552 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:18.474437 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:18.973907 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:19.474667 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:19.973880 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:20.474600 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:20.974647 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:21.473747 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:21.973462 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:22.474409 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:22.974628 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:23.475207 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:23.974279 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:24.474619 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:24.975242 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:25.474365 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:25.974694 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:26.473618 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:26.974192 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:27.474452 2613811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:27.980932 2613811 kapi.go:107] duration metric: took 3m19.510767062s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 10:29:27.983108 2613811 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-895903 cluster.
	I0923 10:29:27.985420 2613811 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 10:29:27.988002 2613811 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 10:29:27.989952 2613811 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, default-storageclass, metrics-server, inspektor-gadget, volcano, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0923 10:29:27.991718 2613811 addons.go:510] duration metric: took 3m31.829532617s for enable addons: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns default-storageclass metrics-server inspektor-gadget volcano yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0923 10:29:27.991772 2613811 start.go:246] waiting for cluster config update ...
	I0923 10:29:27.991794 2613811 start.go:255] writing updated cluster config ...
	I0923 10:29:27.993157 2613811 ssh_runner.go:195] Run: rm -f paused
	I0923 10:29:28.353215 2613811 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:29:28.356276 2613811 out.go:177] * Done! kubectl is now configured to use "addons-895903" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	612a75b16eaa6       4f725bf50aaa5       20 seconds ago      Exited              gadget                                   6                   b36c01d93c225       gadget-wbbdg
	d64e1092b174b       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   2c7151562401b       gcp-auth-89d5ffd79-pbtsl
	19fe0f8a2dbea       8b46b1cd48760       4 minutes ago       Running             admission                                0                   68e8f731f0fbe       volcano-admission-77d7d48b68-gs99c
	a131c9e17bbe0       d9c7ad4c226bf       4 minutes ago       Running             volcano-scheduler                        1                   111067311d9a1       volcano-scheduler-576bc46687-knktf
	1bcf5965ba7a4       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   131a911860624       csi-hostpathplugin-4n4q7
	61d90f5072131       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   131a911860624       csi-hostpathplugin-4n4q7
	e07884af092cf       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   131a911860624       csi-hostpathplugin-4n4q7
	4c9c6f02628d2       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   131a911860624       csi-hostpathplugin-4n4q7
	3d1f1b0a2aeb9       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   131a911860624       csi-hostpathplugin-4n4q7
	75bb7400b98ed       289a818c8d9c5       5 minutes ago       Running             controller                               0                   9bf1ad1aed165       ingress-nginx-controller-bc57996ff-5nsw5
	b0db1ee518ec6       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   eba7cd278b314       csi-hostpath-attacher-0
	6a84ffbec5111       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   131a911860624       csi-hostpathplugin-4n4q7
	fc532f382df3e       be9cac3585579       5 minutes ago       Running             cloud-spanner-emulator                   0                   b28ef9d7882c9       cloud-spanner-emulator-5b584cc74-gmnrx
	c7bcc56a614d9       420193b27261a       5 minutes ago       Exited              patch                                    2                   0120424b37102       ingress-nginx-admission-patch-2trv2
	b5d0935a141fc       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   d1b56ec47f61c       csi-hostpath-resizer-0
	ee516ac731cff       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   9331c98f21379       snapshot-controller-56fcc65765-jt2fx
	3b86988e9cf3c       420193b27261a       5 minutes ago       Exited              create                                   0                   3895921c532eb       ingress-nginx-admission-create-b8hdz
	cf227c2de58f6       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   844a794e5b942       metrics-server-84c5f94fbc-jw4gj
	2902a0355b5f4       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   f8e1da6952484       registry-proxy-skfcc
	108a0ef115107       d9c7ad4c226bf       5 minutes ago       Exited              volcano-scheduler                        0                   111067311d9a1       volcano-scheduler-576bc46687-knktf
	d41f0984aac27       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   837be611557c2       registry-66c9cd494c-jwrzn
	509caddf748c5       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   d624b0a9b41e6       volcano-controllers-56675bb4d5-724bn
	0b10cafbd33b0       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   93e843d7f5f3a       snapshot-controller-56fcc65765-57fj2
	a87b32efcf3b1       77bdba588b953       5 minutes ago       Running             yakd                                     0                   62102cc3a33bb       yakd-dashboard-67d98fc6b-5wqrb
	9e45b0eeea0f3       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   d9d4daea0c5c8       nvidia-device-plugin-daemonset-r7wk4
	ddf731fcc9c15       7ce2150c8929b       6 minutes ago       Running             local-path-provisioner                   0                   28d07c1bf80fc       local-path-provisioner-86d989889c-q6rc8
	86182a62ab42c       2f6c962e7b831       6 minutes ago       Running             coredns                                  0                   54183a39d0309       coredns-7c65d6cfc9-j4q7p
	b8c756c0cb163       35508c2f890c4       6 minutes ago       Running             minikube-ingress-dns                     0                   ee964fd464813       kube-ingress-dns-minikube
	48f66ea8bb65e       ba04bb24b9575       6 minutes ago       Running             storage-provisioner                      0                   b02dcb7d63557       storage-provisioner
	39ee3c472b520       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   6be4e59c7de3d       kindnet-vj6hj
	05c518285c3c1       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   369971865bac9       kube-proxy-mckj4
	74de3e6b43598       279f381cb3736       7 minutes ago       Running             kube-controller-manager                  0                   2867182cccf7a       kube-controller-manager-addons-895903
	22ca1983779c4       d3f53a98c0a9d       7 minutes ago       Running             kube-apiserver                           0                   57a3be5181e6d       kube-apiserver-addons-895903
	20eeff7220c81       7f8aa378bb47d       7 minutes ago       Running             kube-scheduler                           0                   279842e216d5c       kube-scheduler-addons-895903
	6bdf993a2a01b       27e3830e14027       7 minutes ago       Running             etcd                                     0                   14642ca01a85f       etcd-addons-895903
	
	
	==> containerd <==
	Sep 23 10:29:51 addons-895903 containerd[816]: time="2024-09-23T10:29:51.970785687Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"645fe9652862ba06d1df620c6d024160106c085bedacae20caf1d59db57dfdcd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 23 10:29:51 addons-895903 containerd[816]: time="2024-09-23T10:29:51.970904727Z" level=info msg="RemovePodSandbox \"645fe9652862ba06d1df620c6d024160106c085bedacae20caf1d59db57dfdcd\" returns successfully"
	Sep 23 10:32:26 addons-895903 containerd[816]: time="2024-09-23T10:32:26.851770118Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 23 10:32:26 addons-895903 containerd[816]: time="2024-09-23T10:32:26.970101700Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 23 10:32:26 addons-895903 containerd[816]: time="2024-09-23T10:32:26.971729000Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 23 10:32:26 addons-895903 containerd[816]: time="2024-09-23T10:32:26.979747009Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 127.924371ms"
	Sep 23 10:32:26 addons-895903 containerd[816]: time="2024-09-23T10:32:26.979799554Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 23 10:32:26 addons-895903 containerd[816]: time="2024-09-23T10:32:26.981937514Z" level=info msg="CreateContainer within sandbox \"b36c01d93c2251fc2dab2eeac856796549bfd8df74b1033529c70c60cdf2deb8\" for container &ContainerMetadata{Name:gadget,Attempt:6,}"
	Sep 23 10:32:27 addons-895903 containerd[816]: time="2024-09-23T10:32:27.002714336Z" level=info msg="CreateContainer within sandbox \"b36c01d93c2251fc2dab2eeac856796549bfd8df74b1033529c70c60cdf2deb8\" for &ContainerMetadata{Name:gadget,Attempt:6,} returns container id \"612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a\""
	Sep 23 10:32:27 addons-895903 containerd[816]: time="2024-09-23T10:32:27.004469299Z" level=info msg="StartContainer for \"612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a\""
	Sep 23 10:32:27 addons-895903 containerd[816]: time="2024-09-23T10:32:27.078446830Z" level=info msg="StartContainer for \"612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a\" returns successfully"
	Sep 23 10:32:28 addons-895903 containerd[816]: time="2024-09-23T10:32:28.475152513Z" level=error msg="ExecSync for \"612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a\" failed" error="failed to exec in container: failed to start exec \"c80a7c3ea0263b3416e539d3cdac21c07c6aec6f3a400a10dc8f987b235d799e\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 23 10:32:28 addons-895903 containerd[816]: time="2024-09-23T10:32:28.497443930Z" level=error msg="ExecSync for \"612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a\" failed" error="failed to exec in container: failed to start exec \"43ccd234594dc0896703f6fb4172453942e4523f182db0847c54005a5738e903\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 23 10:32:28 addons-895903 containerd[816]: time="2024-09-23T10:32:28.513837993Z" level=error msg="ExecSync for \"612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a\" failed" error="failed to exec in container: failed to start exec \"6fa8736af13b02c58bb50de73d5c5096157a874325886631043f90104fe2f4d5\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 23 10:32:28 addons-895903 containerd[816]: time="2024-09-23T10:32:28.622591712Z" level=info msg="shim disconnected" id=612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a namespace=k8s.io
	Sep 23 10:32:28 addons-895903 containerd[816]: time="2024-09-23T10:32:28.622659199Z" level=warning msg="cleaning up after shim disconnected" id=612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a namespace=k8s.io
	Sep 23 10:32:28 addons-895903 containerd[816]: time="2024-09-23T10:32:28.622670686Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 23 10:32:28 addons-895903 containerd[816]: time="2024-09-23T10:32:28.772272650Z" level=error msg="ExecSync for \"612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Sep 23 10:32:28 addons-895903 containerd[816]: time="2024-09-23T10:32:28.772283120Z" level=error msg="ExecSync for \"612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Sep 23 10:32:28 addons-895903 containerd[816]: time="2024-09-23T10:32:28.773009312Z" level=error msg="ExecSync for \"612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Sep 23 10:32:28 addons-895903 containerd[816]: time="2024-09-23T10:32:28.773169040Z" level=error msg="ExecSync for \"612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Sep 23 10:32:28 addons-895903 containerd[816]: time="2024-09-23T10:32:28.773665325Z" level=error msg="ExecSync for \"612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Sep 23 10:32:28 addons-895903 containerd[816]: time="2024-09-23T10:32:28.773827630Z" level=error msg="ExecSync for \"612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Sep 23 10:32:29 addons-895903 containerd[816]: time="2024-09-23T10:32:29.464846874Z" level=info msg="RemoveContainer for \"6280fad4a0f78fa1524533f83c2e2459fdbf39aba281ccdc406877f217ca79e0\""
	Sep 23 10:32:29 addons-895903 containerd[816]: time="2024-09-23T10:32:29.472001921Z" level=info msg="RemoveContainer for \"6280fad4a0f78fa1524533f83c2e2459fdbf39aba281ccdc406877f217ca79e0\" returns successfully"
	
	
	==> coredns [86182a62ab42cab4656a47799401eedd37f3a84c28eef105d167be5212f33291] <==
	[INFO] 10.244.0.8:58722 - 53129 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105846s
	[INFO] 10.244.0.8:53700 - 45770 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003238969s
	[INFO] 10.244.0.8:53700 - 20436 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002600826s
	[INFO] 10.244.0.8:35138 - 36930 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154419s
	[INFO] 10.244.0.8:35138 - 51276 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000112131s
	[INFO] 10.244.0.8:56982 - 48456 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000146247s
	[INFO] 10.244.0.8:56982 - 15183 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000222719s
	[INFO] 10.244.0.8:35084 - 2037 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000077957s
	[INFO] 10.244.0.8:35084 - 21751 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000093054s
	[INFO] 10.244.0.8:54347 - 41389 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000077391s
	[INFO] 10.244.0.8:54347 - 17576 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080327s
	[INFO] 10.244.0.8:33234 - 65497 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.011172252s
	[INFO] 10.244.0.8:33234 - 6886 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.015495375s
	[INFO] 10.244.0.8:39269 - 12238 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083848s
	[INFO] 10.244.0.8:39269 - 16845 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000049189s
	[INFO] 10.244.0.24:47228 - 49608 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000184729s
	[INFO] 10.244.0.24:51930 - 51213 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000383654s
	[INFO] 10.244.0.24:34754 - 25619 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00013696s
	[INFO] 10.244.0.24:59634 - 2973 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000354049s
	[INFO] 10.244.0.24:46673 - 37287 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000166359s
	[INFO] 10.244.0.24:56562 - 42301 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000112189s
	[INFO] 10.244.0.24:43030 - 59880 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002345663s
	[INFO] 10.244.0.24:35327 - 21806 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001811266s
	[INFO] 10.244.0.24:56902 - 56902 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001948381s
	[INFO] 10.244.0.24:38096 - 60288 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001819373s
	
	
	==> describe nodes <==
	Name:               addons-895903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-895903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=addons-895903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_25_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-895903
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-895903"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:25:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-895903
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:32:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:29:56 +0000   Mon, 23 Sep 2024 10:25:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:29:56 +0000   Mon, 23 Sep 2024 10:25:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:29:56 +0000   Mon, 23 Sep 2024 10:25:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:29:56 +0000   Mon, 23 Sep 2024 10:25:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-895903
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6624dc2cfbb4e6f9ec51a72c242076a
	  System UUID:                bfb89664-39e7-4869-9292-c0198371ac0f
	  Boot ID:                    d8899273-2c3a-49f7-8c9a-66d2209373ba
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-gmnrx      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  gadget                      gadget-wbbdg                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  gcp-auth                    gcp-auth-89d5ffd79-pbtsl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-5nsw5    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m43s
	  kube-system                 coredns-7c65d6cfc9-j4q7p                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m50s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 csi-hostpathplugin-4n4q7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 etcd-addons-895903                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m55s
	  kube-system                 kindnet-vj6hj                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m51s
	  kube-system                 kube-apiserver-addons-895903                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-controller-manager-addons-895903       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-proxy-mckj4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-scheduler-addons-895903                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 metrics-server-84c5f94fbc-jw4gj             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m45s
	  kube-system                 nvidia-device-plugin-daemonset-r7wk4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 registry-66c9cd494c-jwrzn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 registry-proxy-skfcc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 snapshot-controller-56fcc65765-57fj2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 snapshot-controller-56fcc65765-jt2fx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  local-path-storage          local-path-provisioner-86d989889c-q6rc8     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  volcano-system              volcano-admission-77d7d48b68-gs99c          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  volcano-system              volcano-controllers-56675bb4d5-724bn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  volcano-system              volcano-scheduler-576bc46687-knktf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-5wqrb              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 6m49s                kube-proxy       
	  Normal   NodeHasSufficientMemory  7m3s (x8 over 7m3s)  kubelet          Node addons-895903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m3s (x7 over 7m3s)  kubelet          Node addons-895903 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m3s (x7 over 7m3s)  kubelet          Node addons-895903 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m56s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m56s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m55s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m55s                kubelet          Node addons-895903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m55s                kubelet          Node addons-895903 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m55s                kubelet          Node addons-895903 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m52s                node-controller  Node addons-895903 event: Registered Node addons-895903 in Controller
	
	
	==> dmesg <==
	[Sep23 08:17] systemd-journald[222]: Failed to send WATCHDOG=1 notification message: Connection refused
	
	
	==> etcd [6bdf993a2a01b8852ec382a7cc4b78988ba13dbd216a009cb59b93fa78da01cc] <==
	{"level":"info","ts":"2024-09-23T10:25:45.516062Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-23T10:25:45.516087Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-23T10:25:45.517141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-23T10:25:45.517222Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-23T10:25:45.515270Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-23T10:25:45.883309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-23T10:25:45.883524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-23T10:25:45.883622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-23T10:25:45.883732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-23T10:25:45.883822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T10:25:45.883906Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-23T10:25:45.883996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T10:25:45.891408Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:25:45.895498Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-895903 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T10:25:45.899356Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:25:45.899867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:25:45.900141Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T10:25:45.900248Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T10:25:45.900992Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:25:45.901586Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:25:45.901675Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:25:45.901705Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:25:45.923302Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T10:25:45.949917Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:25:45.968048Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [d64e1092b174b0340ee1b99592a30c89cc36c0f60856509a345b7f8de22c4dba] <==
	2024/09/23 10:29:27 GCP Auth Webhook started!
	2024/09/23 10:29:44 Ready to marshal response ...
	2024/09/23 10:29:44 Ready to write response ...
	2024/09/23 10:29:45 Ready to marshal response ...
	2024/09/23 10:29:45 Ready to write response ...
	
	
	==> kernel <==
	 10:32:47 up 1 day, 18:15,  0 users,  load average: 0.24, 1.19, 1.99
	Linux addons-895903 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [39ee3c472b5203440f2df1e97a66266d071ac3ee846e391cddd51fefc540df02] <==
	I0923 10:30:38.115875       1 main.go:299] handling current node
	I0923 10:30:48.124737       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:30:48.124776       1 main.go:299] handling current node
	I0923 10:30:58.115224       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:30:58.115260       1 main.go:299] handling current node
	I0923 10:31:08.119412       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:31:08.119446       1 main.go:299] handling current node
	I0923 10:31:18.115269       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:31:18.115508       1 main.go:299] handling current node
	I0923 10:31:28.115086       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:31:28.115215       1 main.go:299] handling current node
	I0923 10:31:38.119432       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:31:38.119493       1 main.go:299] handling current node
	I0923 10:31:48.123429       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:31:48.123472       1 main.go:299] handling current node
	I0923 10:31:58.115564       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:31:58.115595       1 main.go:299] handling current node
	I0923 10:32:08.115645       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:32:08.115683       1 main.go:299] handling current node
	I0923 10:32:18.115049       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:32:18.115089       1 main.go:299] handling current node
	I0923 10:32:28.116287       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:32:28.116326       1 main.go:299] handling current node
	I0923 10:32:38.118445       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:32:38.118481       1 main.go:299] handling current node
	
	
	==> kube-apiserver [22ca1983779c47b817279a341b9cd9e116968ffbb124cd7faf35dd235c858eec] <==
	W0923 10:28:11.507809       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:12.492935       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:13.507606       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:14.601165       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:15.689787       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:16.770637       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:17.853788       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:18.888383       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:19.944223       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:20.975458       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:21.990927       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:23.078749       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:24.166603       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:25.197576       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:26.230489       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:27.298326       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:28.383426       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.13.255:443: connect: connection refused
	W0923 10:28:52.349542       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.159.173:443: connect: connection refused
	E0923 10:28:52.349589       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.159.173:443: connect: connection refused" logger="UnhandledError"
	W0923 10:29:11.474390       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.159.173:443: connect: connection refused
	E0923 10:29:11.474432       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.159.173:443: connect: connection refused" logger="UnhandledError"
	W0923 10:29:11.515013       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.159.173:443: connect: connection refused
	E0923 10:29:11.515224       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.159.173:443: connect: connection refused" logger="UnhandledError"
	I0923 10:29:44.868849       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0923 10:29:44.903934       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [74de3e6b43598eb398a73e476abb3b74df7078c623fb5eaaab2f36b209edd89d] <==
	I0923 10:29:11.499517       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 10:29:11.500316       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 10:29:11.511073       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 10:29:11.523064       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 10:29:11.530477       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 10:29:11.534519       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 10:29:11.549067       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 10:29:12.912443       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 10:29:12.926372       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 10:29:14.031736       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 10:29:14.051957       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 10:29:15.039852       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 10:29:15.050830       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 10:29:15.060900       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 10:29:15.062543       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 10:29:15.075442       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 10:29:15.086137       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 10:29:27.992533       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="15.549078ms"
	I0923 10:29:27.992709       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="107.076µs"
	I0923 10:29:44.614085       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0923 10:29:45.050592       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0923 10:29:45.057803       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0923 10:29:45.135043       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0923 10:29:45.137476       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0923 10:29:56.101392       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-895903"
	
	
	==> kube-proxy [05c518285c3c1fd4d62e8ddf3baeddeb21ec8678340753bce2c13ffd08f99f3c] <==
	I0923 10:25:57.784541       1 server_linux.go:66] "Using iptables proxy"
	I0923 10:25:57.880569       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 10:25:57.880642       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:25:57.933379       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 10:25:57.933727       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:25:57.949526       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:25:57.950265       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:25:57.950291       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:25:57.956932       1 config.go:199] "Starting service config controller"
	I0923 10:25:57.956968       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:25:57.957007       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:25:57.957012       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:25:57.968016       1 config.go:328] "Starting node config controller"
	I0923 10:25:57.968057       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:25:58.057811       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:25:58.057876       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:25:58.068351       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [20eeff7220c81f507d6a104120551da6efb5343427ce72308d50f0e571eed1c5] <==
	W0923 10:25:49.279989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 10:25:49.280009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:25:49.280098       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:25:49.280121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:25:49.278786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 10:25:49.280156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:25:49.279067       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:25:49.280194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:25:49.280358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 10:25:49.280505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:25:50.093178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 10:25:50.093757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:25:50.128661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 10:25:50.128915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:25:50.278608       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:25:50.278655       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 10:25:50.283303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:25:50.283355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:25:50.315575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 10:25:50.315620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:25:50.331435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:25:50.331673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:25:50.413909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:25:50.414032       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 10:25:52.973710       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 10:31:49 addons-895903 kubelet[1477]: I0923 10:31:49.850235    1477 scope.go:117] "RemoveContainer" containerID="6280fad4a0f78fa1524533f83c2e2459fdbf39aba281ccdc406877f217ca79e0"
	Sep 23 10:31:49 addons-895903 kubelet[1477]: E0923 10:31:49.850428    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-wbbdg_gadget(da6e2bf1-1c63-4e4f-93e1-244f156ae28b)\"" pod="gadget/gadget-wbbdg" podUID="da6e2bf1-1c63-4e4f-93e1-244f156ae28b"
	Sep 23 10:31:51 addons-895903 kubelet[1477]: I0923 10:31:51.851059    1477 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-skfcc" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 10:32:00 addons-895903 kubelet[1477]: I0923 10:32:00.849798    1477 scope.go:117] "RemoveContainer" containerID="6280fad4a0f78fa1524533f83c2e2459fdbf39aba281ccdc406877f217ca79e0"
	Sep 23 10:32:00 addons-895903 kubelet[1477]: E0923 10:32:00.850001    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-wbbdg_gadget(da6e2bf1-1c63-4e4f-93e1-244f156ae28b)\"" pod="gadget/gadget-wbbdg" podUID="da6e2bf1-1c63-4e4f-93e1-244f156ae28b"
	Sep 23 10:32:02 addons-895903 kubelet[1477]: I0923 10:32:02.849454    1477 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-jwrzn" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 10:32:03 addons-895903 kubelet[1477]: I0923 10:32:03.849558    1477 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-r7wk4" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 10:32:13 addons-895903 kubelet[1477]: I0923 10:32:13.849884    1477 scope.go:117] "RemoveContainer" containerID="6280fad4a0f78fa1524533f83c2e2459fdbf39aba281ccdc406877f217ca79e0"
	Sep 23 10:32:13 addons-895903 kubelet[1477]: E0923 10:32:13.850582    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-wbbdg_gadget(da6e2bf1-1c63-4e4f-93e1-244f156ae28b)\"" pod="gadget/gadget-wbbdg" podUID="da6e2bf1-1c63-4e4f-93e1-244f156ae28b"
	Sep 23 10:32:26 addons-895903 kubelet[1477]: I0923 10:32:26.850294    1477 scope.go:117] "RemoveContainer" containerID="6280fad4a0f78fa1524533f83c2e2459fdbf39aba281ccdc406877f217ca79e0"
	Sep 23 10:32:28 addons-895903 kubelet[1477]: E0923 10:32:28.475501    1477 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"c80a7c3ea0263b3416e539d3cdac21c07c6aec6f3a400a10dc8f987b235d799e\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" containerID="612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 10:32:28 addons-895903 kubelet[1477]: E0923 10:32:28.497871    1477 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"43ccd234594dc0896703f6fb4172453942e4523f182db0847c54005a5738e903\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" containerID="612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 10:32:28 addons-895903 kubelet[1477]: E0923 10:32:28.514135    1477 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"6fa8736af13b02c58bb50de73d5c5096157a874325886631043f90104fe2f4d5\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" containerID="612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 10:32:28 addons-895903 kubelet[1477]: E0923 10:32:28.772496    1477 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 10:32:28 addons-895903 kubelet[1477]: E0923 10:32:28.772630    1477 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 10:32:28 addons-895903 kubelet[1477]: E0923 10:32:28.773160    1477 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 10:32:28 addons-895903 kubelet[1477]: E0923 10:32:28.773459    1477 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 10:32:28 addons-895903 kubelet[1477]: E0923 10:32:28.773843    1477 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 10:32:28 addons-895903 kubelet[1477]: E0923 10:32:28.774261    1477 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 10:32:29 addons-895903 kubelet[1477]: I0923 10:32:29.463148    1477 scope.go:117] "RemoveContainer" containerID="6280fad4a0f78fa1524533f83c2e2459fdbf39aba281ccdc406877f217ca79e0"
	Sep 23 10:32:29 addons-895903 kubelet[1477]: I0923 10:32:29.463741    1477 scope.go:117] "RemoveContainer" containerID="612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a"
	Sep 23 10:32:29 addons-895903 kubelet[1477]: E0923 10:32:29.463924    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-wbbdg_gadget(da6e2bf1-1c63-4e4f-93e1-244f156ae28b)\"" pod="gadget/gadget-wbbdg" podUID="da6e2bf1-1c63-4e4f-93e1-244f156ae28b"
	Sep 23 10:32:33 addons-895903 kubelet[1477]: I0923 10:32:33.772343    1477 scope.go:117] "RemoveContainer" containerID="612a75b16eaa6621cce0021d0e532b8b85d6d3f64846c7e9090af38b638f4e8a"
	Sep 23 10:32:33 addons-895903 kubelet[1477]: E0923 10:32:33.773007    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-wbbdg_gadget(da6e2bf1-1c63-4e4f-93e1-244f156ae28b)\"" pod="gadget/gadget-wbbdg" podUID="da6e2bf1-1c63-4e4f-93e1-244f156ae28b"
	Sep 23 10:32:46 addons-895903 kubelet[1477]: I0923 10:32:46.849951    1477 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-j4q7p" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [48f66ea8bb65e78e4bdacefce8027301bcca563203c0539a572141675ab9cc9d] <==
	I0923 10:26:01.860414       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:26:01.872035       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:26:01.872123       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:26:01.885264       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:26:01.885433       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-895903_08ed9cf5-4a03-4805-9a01-3039ef1cf3c9!
	I0923 10:26:01.885501       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"21b594b8-25eb-4902-bf82-fa304e2a6479", APIVersion:"v1", ResourceVersion:"530", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-895903_08ed9cf5-4a03-4805-9a01-3039ef1cf3c9 became leader
	I0923 10:26:01.985713       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-895903_08ed9cf5-4a03-4805-9a01-3039ef1cf3c9!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-895903 -n addons-895903
helpers_test.go:261: (dbg) Run:  kubectl --context addons-895903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-b8hdz ingress-nginx-admission-patch-2trv2 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-895903 describe pod ingress-nginx-admission-create-b8hdz ingress-nginx-admission-patch-2trv2 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-895903 describe pod ingress-nginx-admission-create-b8hdz ingress-nginx-admission-patch-2trv2 test-job-nginx-0: exit status 1 (96.465255ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-b8hdz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2trv2" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-895903 describe pod ingress-nginx-admission-create-b8hdz ingress-nginx-admission-patch-2trv2 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (200.11s)

Test pass (299/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.69
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 6.87
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 260.77
31 TestAddons/serial/GCPAuth/Namespaces 0.23
33 TestAddons/parallel/Registry 16.16
34 TestAddons/parallel/Ingress 20.66
35 TestAddons/parallel/InspektorGadget 11.11
36 TestAddons/parallel/MetricsServer 7.07
38 TestAddons/parallel/CSI 55.5
39 TestAddons/parallel/Headlamp 16.34
40 TestAddons/parallel/CloudSpanner 6.74
41 TestAddons/parallel/LocalPath 8.84
42 TestAddons/parallel/NvidiaDevicePlugin 5.68
43 TestAddons/parallel/Yakd 11.94
44 TestAddons/StoppedEnableDisable 12.37
45 TestCertOptions 35.35
46 TestCertExpiration 229.01
48 TestForceSystemdFlag 44.82
49 TestForceSystemdEnv 44.5
50 TestDockerEnvContainerd 47.04
55 TestErrorSpam/setup 27.63
56 TestErrorSpam/start 0.73
57 TestErrorSpam/status 1
58 TestErrorSpam/pause 1.75
59 TestErrorSpam/unpause 1.85
60 TestErrorSpam/stop 1.49
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 78.62
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 6.14
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.08
72 TestFunctional/serial/CacheCmd/cache/add_local 1.3
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.14
77 TestFunctional/serial/CacheCmd/cache/delete 0.16
78 TestFunctional/serial/MinikubeKubectlCmd 0.16
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
80 TestFunctional/serial/ExtraConfig 40.45
81 TestFunctional/serial/ComponentHealth 0.09
82 TestFunctional/serial/LogsCmd 1.74
83 TestFunctional/serial/LogsFileCmd 1.73
84 TestFunctional/serial/InvalidService 4.05
86 TestFunctional/parallel/ConfigCmd 0.48
87 TestFunctional/parallel/DashboardCmd 8.42
88 TestFunctional/parallel/DryRun 0.61
89 TestFunctional/parallel/InternationalLanguage 0.18
90 TestFunctional/parallel/StatusCmd 1.08
94 TestFunctional/parallel/ServiceCmdConnect 10.67
95 TestFunctional/parallel/AddonsCmd 0.19
96 TestFunctional/parallel/PersistentVolumeClaim 26.6
98 TestFunctional/parallel/SSHCmd 0.7
99 TestFunctional/parallel/CpCmd 2.01
101 TestFunctional/parallel/FileSync 0.35
102 TestFunctional/parallel/CertSync 2.06
106 TestFunctional/parallel/NodeLabels 0.12
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
110 TestFunctional/parallel/License 0.25
111 TestFunctional/parallel/Version/short 0.1
112 TestFunctional/parallel/Version/components 1.3
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
117 TestFunctional/parallel/ImageCommands/ImageBuild 3.68
118 TestFunctional/parallel/ImageCommands/Setup 0.71
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.51
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.41
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.73
126 TestFunctional/parallel/ProfileCmd/profile_list 0.51
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.66
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.46
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/ServiceCmd/DeployApp 7.24
144 TestFunctional/parallel/ServiceCmd/List 0.52
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.61
147 TestFunctional/parallel/MountCmd/any-port 7.71
148 TestFunctional/parallel/ServiceCmd/Format 0.52
149 TestFunctional/parallel/ServiceCmd/URL 0.49
150 TestFunctional/parallel/MountCmd/specific-port 2.08
151 TestFunctional/parallel/MountCmd/VerifyCleanup 2.19
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.01
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 112.21
159 TestMultiControlPlane/serial/DeployApp 30.09
160 TestMultiControlPlane/serial/PingHostFromPods 1.62
161 TestMultiControlPlane/serial/AddWorkerNode 24.4
162 TestMultiControlPlane/serial/NodeLabels 0.11
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.01
164 TestMultiControlPlane/serial/CopyFile 19.01
165 TestMultiControlPlane/serial/StopSecondaryNode 12.93
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.78
167 TestMultiControlPlane/serial/RestartSecondaryNode 31.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.02
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 143.31
170 TestMultiControlPlane/serial/DeleteSecondaryNode 10.53
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
172 TestMultiControlPlane/serial/StopCluster 35.98
173 TestMultiControlPlane/serial/RestartCluster 68.44
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
175 TestMultiControlPlane/serial/AddSecondaryNode 43.74
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.04
180 TestJSONOutput/start/Command 50.5
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.74
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.68
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.8
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.25
205 TestKicCustomNetwork/create_custom_network 37.83
206 TestKicCustomNetwork/use_default_bridge_network 34.44
207 TestKicExistingNetwork 31.52
208 TestKicCustomSubnet 33.99
209 TestKicStaticIP 31.14
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 62.53
214 TestMountStart/serial/StartWithMountFirst 5.94
215 TestMountStart/serial/VerifyMountFirst 0.26
216 TestMountStart/serial/StartWithMountSecond 6.21
217 TestMountStart/serial/VerifyMountSecond 0.29
218 TestMountStart/serial/DeleteFirst 1.61
219 TestMountStart/serial/VerifyMountPostDelete 0.26
220 TestMountStart/serial/Stop 1.19
221 TestMountStart/serial/RestartStopped 7.32
222 TestMountStart/serial/VerifyMountPostStop 0.25
225 TestMultiNode/serial/FreshStart2Nodes 68.34
226 TestMultiNode/serial/DeployApp2Nodes 17.25
227 TestMultiNode/serial/PingHostFrom2Pods 0.99
228 TestMultiNode/serial/AddNode 17.68
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.67
231 TestMultiNode/serial/CopyFile 10.02
232 TestMultiNode/serial/StopNode 2.3
233 TestMultiNode/serial/StartAfterStop 9.85
234 TestMultiNode/serial/RestartKeepsNodes 101.89
235 TestMultiNode/serial/DeleteNode 5.57
236 TestMultiNode/serial/StopMultiNode 24.03
237 TestMultiNode/serial/RestartMultiNode 53.38
238 TestMultiNode/serial/ValidateNameConflict 33.94
243 TestPreload 125.5
245 TestScheduledStopUnix 107.1
248 TestInsufficientStorage 10.58
249 TestRunningBinaryUpgrade 91.53
251 TestKubernetesUpgrade 354.93
252 TestMissingContainerUpgrade 172.29
254 TestPause/serial/Start 63.89
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
257 TestNoKubernetes/serial/StartWithK8s 42.34
258 TestNoKubernetes/serial/StartWithStopK8s 17.48
259 TestNoKubernetes/serial/Start 5.51
260 TestPause/serial/SecondStartNoReconfiguration 7.03
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
262 TestNoKubernetes/serial/ProfileList 1.37
263 TestNoKubernetes/serial/Stop 1.32
264 TestNoKubernetes/serial/StartNoArgs 7.27
265 TestPause/serial/Pause 0.76
266 TestPause/serial/VerifyStatus 0.31
267 TestPause/serial/Unpause 1.15
268 TestPause/serial/PauseAgain 1.17
269 TestPause/serial/DeletePaused 2.86
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
271 TestPause/serial/VerifyDeletedResources 0.24
272 TestStoppedBinaryUpgrade/Setup 0.91
273 TestStoppedBinaryUpgrade/Upgrade 106.41
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.11
289 TestNetworkPlugins/group/false 4.69
294 TestStartStop/group/old-k8s-version/serial/FirstStart 131.01
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.56
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.15
297 TestStartStop/group/old-k8s-version/serial/Stop 12.14
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
299 TestStartStop/group/old-k8s-version/serial/SecondStart 151.46
301 TestStartStop/group/no-preload/serial/FirstStart 64.58
302 TestStartStop/group/no-preload/serial/DeployApp 9.35
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
304 TestStartStop/group/no-preload/serial/Stop 12.22
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
306 TestStartStop/group/no-preload/serial/SecondStart 266.84
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
310 TestStartStop/group/old-k8s-version/serial/Pause 2.92
312 TestStartStop/group/embed-certs/serial/FirstStart 53.67
313 TestStartStop/group/embed-certs/serial/DeployApp 9.34
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
315 TestStartStop/group/embed-certs/serial/Stop 12.04
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
317 TestStartStop/group/embed-certs/serial/SecondStart 269.6
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
321 TestStartStop/group/no-preload/serial/Pause 3.26
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.54
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.25
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.12
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 278.04
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
332 TestStartStop/group/embed-certs/serial/Pause 3.11
334 TestStartStop/group/newest-cni/serial/FirstStart 34.49
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.17
337 TestStartStop/group/newest-cni/serial/Stop 1.26
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
339 TestStartStop/group/newest-cni/serial/SecondStart 15.95
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
343 TestStartStop/group/newest-cni/serial/Pause 3.24
344 TestNetworkPlugins/group/auto/Start 86.63
345 TestNetworkPlugins/group/auto/KubeletFlags 0.3
346 TestNetworkPlugins/group/auto/NetCatPod 10.32
347 TestNetworkPlugins/group/auto/DNS 0.18
348 TestNetworkPlugins/group/auto/Localhost 0.18
349 TestNetworkPlugins/group/auto/HairPin 0.15
350 TestNetworkPlugins/group/kindnet/Start 48.08
351 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
352 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
353 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
354 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
355 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
356 TestNetworkPlugins/group/kindnet/DNS 0.19
357 TestNetworkPlugins/group/kindnet/Localhost 0.15
358 TestNetworkPlugins/group/kindnet/HairPin 0.16
359 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
360 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.96
361 TestNetworkPlugins/group/calico/Start 81.33
362 TestNetworkPlugins/group/custom-flannel/Start 58.8
363 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
364 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
365 TestNetworkPlugins/group/calico/ControllerPod 5.04
366 TestNetworkPlugins/group/custom-flannel/DNS 0.22
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
369 TestNetworkPlugins/group/calico/KubeletFlags 0.5
370 TestNetworkPlugins/group/calico/NetCatPod 11.36
371 TestNetworkPlugins/group/calico/DNS 0.3
372 TestNetworkPlugins/group/calico/Localhost 0.21
373 TestNetworkPlugins/group/calico/HairPin 0.23
374 TestNetworkPlugins/group/enable-default-cni/Start 84.32
375 TestNetworkPlugins/group/flannel/Start 52.6
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
378 TestNetworkPlugins/group/flannel/NetCatPod 24.27
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 25.28
381 TestNetworkPlugins/group/flannel/DNS 0.21
382 TestNetworkPlugins/group/flannel/Localhost 0.16
383 TestNetworkPlugins/group/flannel/HairPin 0.22
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.25
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
387 TestNetworkPlugins/group/bridge/Start 73.74
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
389 TestNetworkPlugins/group/bridge/NetCatPod 11.26
390 TestNetworkPlugins/group/bridge/DNS 0.16
391 TestNetworkPlugins/group/bridge/Localhost 0.15
392 TestNetworkPlugins/group/bridge/HairPin 0.16
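The third column of the table above is each test's duration in seconds. A minimal shell sketch of totaling that column over a few rows (the three sample rows are copied verbatim from this report; any real use would pipe the full table in instead):

```shell
# Sum the duration column (last field, in seconds) of rows shaped like the table above.
# Sample rows copied from this report; replace the printf with the real table input.
total=$(printf '%s\n' \
  '220 TestMountStart/serial/Stop 1.19' \
  '221 TestMountStart/serial/RestartStopped 7.32' \
  '222 TestMountStart/serial/VerifyMountPostStop 0.25' |
  awk '{sum += $NF} END {printf "%.2f", sum}')
echo "$total"
```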
TestDownloadOnly/v1.20.0/json-events (6.69s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-179078 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-179078 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.693730531s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.69s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 10:24:58.473299 2613053 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0923 10:24:58.473406 2613053 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-2607666/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-179078
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-179078: exit status 85 (71.614196ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-179078 | jenkins | v1.34.0 | 23 Sep 24 10:24 UTC |          |
	|         | -p download-only-179078        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:24:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:24:51.825218 2613058 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:24:51.825350 2613058 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:24:51.825360 2613058 out.go:358] Setting ErrFile to fd 2...
	I0923 10:24:51.825366 2613058 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:24:51.825612 2613058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
	W0923 10:24:51.825748 2613058 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19689-2607666/.minikube/config/config.json: open /home/jenkins/minikube-integration/19689-2607666/.minikube/config/config.json: no such file or directory
	I0923 10:24:51.826206 2613058 out.go:352] Setting JSON to true
	I0923 10:24:51.827118 2613058 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":151639,"bootTime":1726935453,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 10:24:51.827192 2613058 start.go:139] virtualization:  
	I0923 10:24:51.830358 2613058 out.go:97] [download-only-179078] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0923 10:24:51.830547 2613058 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19689-2607666/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:24:51.830608 2613058 notify.go:220] Checking for updates...
	I0923 10:24:51.832502 2613058 out.go:169] MINIKUBE_LOCATION=19689
	I0923 10:24:51.834814 2613058 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:24:51.837122 2613058 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19689-2607666/kubeconfig
	I0923 10:24:51.838912 2613058 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2607666/.minikube
	I0923 10:24:51.841043 2613058 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0923 10:24:51.844542 2613058 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 10:24:51.844785 2613058 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:24:51.872891 2613058 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:24:51.873022 2613058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:24:51.932099 2613058 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 10:24:51.922686173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:24:51.932215 2613058 docker.go:318] overlay module found
	I0923 10:24:51.934195 2613058 out.go:97] Using the docker driver based on user configuration
	I0923 10:24:51.934221 2613058 start.go:297] selected driver: docker
	I0923 10:24:51.934228 2613058 start.go:901] validating driver "docker" against <nil>
	I0923 10:24:51.934339 2613058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:24:51.985642 2613058 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 10:24:51.976438646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:24:51.985852 2613058 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:24:51.986198 2613058 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0923 10:24:51.986364 2613058 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 10:24:51.988899 2613058 out.go:169] Using Docker driver with root privileges
	I0923 10:24:51.990888 2613058 cni.go:84] Creating CNI manager for ""
	I0923 10:24:51.990950 2613058 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 10:24:51.990964 2613058 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 10:24:51.991057 2613058 start.go:340] cluster config:
	{Name:download-only-179078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-179078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:24:51.993037 2613058 out.go:97] Starting "download-only-179078" primary control-plane node in "download-only-179078" cluster
	I0923 10:24:51.993067 2613058 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 10:24:51.995054 2613058 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:24:51.995086 2613058 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0923 10:24:51.995182 2613058 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:24:52.019085 2613058 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:24:52.019880 2613058 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:24:52.020013 2613058 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:24:52.050580 2613058 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0923 10:24:52.050617 2613058 cache.go:56] Caching tarball of preloaded images
	I0923 10:24:52.050782 2613058 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0923 10:24:52.053305 2613058 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 10:24:52.053353 2613058 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0923 10:24:52.134355 2613058 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19689-2607666/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0923 10:24:56.097583 2613058 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	
	
	* The control-plane node download-only-179078 host does not exist
	  To start a cluster, run: "minikube start -p download-only-179078"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

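The `--- PASS: name (Ns)` lines above follow the standard `go test` result format. A small sketch of splitting one such line into its test name and duration, using a result line from this report (bash parameter expansion only, no external tools):

```shell
# Parse a go test result line into the test name and its duration in seconds.
line='--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)'
name=${line#"--- PASS: "}   # drop the status prefix
name=${name%% *}            # keep everything before the first space
dur=${line##*\(}            # keep everything after the last '('
dur=${dur%s\)}              # strip the trailing 's)'
echo "$name $dur"
```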
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-179078
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (6.87s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-676157 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-676157 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.868531541s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.87s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 10:25:05.752536 2613053 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0923 10:25:05.752575 2613053 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-2607666/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

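The preload-exists check above simply looks for the cached tarball on disk. A hedged sketch of the same check, following the cache layout shown in the log (the `$HOME`-relative prefix is an assumption for illustration; the CI run uses a `minikube-integration` directory instead):

```shell
# Build the preload tarball path the check above reports, then test whether it exists.
# Directory layout mirrors the log output; the $HOME prefix is an assumption.
cache="$HOME/.minikube/cache/preloaded-tarball"
tarball="preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4"
path="$cache/$tarball"
if [ -f "$path" ]; then
  echo "Found local preload: $path"
else
  echo "preload missing: $path"
fi
```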
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-676157
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-676157: exit status 85 (66.489353ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-179078 | jenkins | v1.34.0 | 23 Sep 24 10:24 UTC |                     |
	|         | -p download-only-179078        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 10:24 UTC | 23 Sep 24 10:24 UTC |
	| delete  | -p download-only-179078        | download-only-179078 | jenkins | v1.34.0 | 23 Sep 24 10:24 UTC | 23 Sep 24 10:24 UTC |
	| start   | -o=json --download-only        | download-only-676157 | jenkins | v1.34.0 | 23 Sep 24 10:24 UTC |                     |
	|         | -p download-only-676157        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:24:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:24:58.931632 2613261 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:24:58.931859 2613261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:24:58.931888 2613261 out.go:358] Setting ErrFile to fd 2...
	I0923 10:24:58.931906 2613261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:24:58.932190 2613261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
	I0923 10:24:58.932660 2613261 out.go:352] Setting JSON to true
	I0923 10:24:58.933656 2613261 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":151646,"bootTime":1726935453,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 10:24:58.933756 2613261 start.go:139] virtualization:  
	I0923 10:24:58.936513 2613261 out.go:97] [download-only-676157] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 10:24:58.936813 2613261 notify.go:220] Checking for updates...
	I0923 10:24:58.939319 2613261 out.go:169] MINIKUBE_LOCATION=19689
	I0923 10:24:58.940995 2613261 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:24:58.943173 2613261 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19689-2607666/kubeconfig
	I0923 10:24:58.945058 2613261 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2607666/.minikube
	I0923 10:24:58.946809 2613261 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0923 10:24:58.951262 2613261 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 10:24:58.951583 2613261 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:24:58.972583 2613261 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:24:58.972690 2613261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:24:59.029796 2613261 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 10:24:59.020018655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:24:59.029919 2613261 docker.go:318] overlay module found
	I0923 10:24:59.032406 2613261 out.go:97] Using the docker driver based on user configuration
	I0923 10:24:59.032461 2613261 start.go:297] selected driver: docker
	I0923 10:24:59.032469 2613261 start.go:901] validating driver "docker" against <nil>
	I0923 10:24:59.032591 2613261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:24:59.092011 2613261 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 10:24:59.082405027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:24:59.092211 2613261 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:24:59.092493 2613261 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0923 10:24:59.092648 2613261 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 10:24:59.094861 2613261 out.go:169] Using Docker driver with root privileges
	I0923 10:24:59.096515 2613261 cni.go:84] Creating CNI manager for ""
	I0923 10:24:59.096576 2613261 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 10:24:59.096595 2613261 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 10:24:59.096689 2613261 start.go:340] cluster config:
	{Name:download-only-676157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-676157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:24:59.098496 2613261 out.go:97] Starting "download-only-676157" primary control-plane node in "download-only-676157" cluster
	I0923 10:24:59.098516 2613261 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 10:24:59.100111 2613261 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:24:59.100136 2613261 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 10:24:59.100298 2613261 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:24:59.114417 2613261 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:24:59.114547 2613261 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:24:59.114573 2613261 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 10:24:59.114581 2613261 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 10:24:59.114589 2613261 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 10:24:59.152503 2613261 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0923 10:24:59.152572 2613261 cache.go:56] Caching tarball of preloaded images
	I0923 10:24:59.153865 2613261 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 10:24:59.156024 2613261 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 10:24:59.156055 2613261 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0923 10:24:59.235269 2613261 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19689-2607666/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-676157 host does not exist
	  To start a cluster, run: "minikube start -p download-only-676157"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-676157
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
I0923 10:25:06.946964 2613053 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-598077 --alsologtostderr --binary-mirror http://127.0.0.1:45953 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-598077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-598077
--- PASS: TestBinaryMirror (0.57s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-895903
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-895903: exit status 85 (63.579507ms)

-- stdout --
	* Profile "addons-895903" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-895903"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-895903
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-895903: exit status 85 (83.28759ms)

-- stdout --
	* Profile "addons-895903" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-895903"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (260.77s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-895903 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-895903 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (4m20.769881461s)
--- PASS: TestAddons/Setup (260.77s)

TestAddons/serial/GCPAuth/Namespaces (0.23s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-895903 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-895903 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)

TestAddons/parallel/Registry (16.16s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.094892ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-jwrzn" [b939f687-74b6-4a54-9a56-07aa57ae0752] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004174175s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-skfcc" [7874c975-6ab8-4813-bd94-94c3ecf85327] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004211745s
addons_test.go:338: (dbg) Run:  kubectl --context addons-895903 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-895903 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-895903 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.196924588s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-895903 ip
2024/09/23 10:33:22 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-895903 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.16s)

TestAddons/parallel/Ingress (20.66s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-895903 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-895903 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-895903 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [55532c02-897d-46b1-8e9b-e2fdc98896fe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [55532c02-897d-46b1-8e9b-e2fdc98896fe] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004188594s
I0923 10:34:16.084527 2613053 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-895903 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-895903 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-895903 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-895903 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-895903 addons disable ingress-dns --alsologtostderr -v=1: (1.940781341s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-895903 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-895903 addons disable ingress --alsologtostderr -v=1: (7.908060952s)
--- PASS: TestAddons/parallel/Ingress (20.66s)

TestAddons/parallel/InspektorGadget (11.11s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-wbbdg" [da6e2bf1-1c63-4e4f-93e1-244f156ae28b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012746817s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-895903
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-895903: (6.100058907s)
--- PASS: TestAddons/parallel/InspektorGadget (11.11s)

TestAddons/parallel/MetricsServer (7.07s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.19325ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-jw4gj" [ce212bd3-d200-4f28-ad95-8a82fcaa0703] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007720348s
addons_test.go:413: (dbg) Run:  kubectl --context addons-895903 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-895903 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.07s)

TestAddons/parallel/CSI (55.5s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0923 10:33:32.456741 2613053 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 10:33:32.462376 2613053 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 10:33:32.462404 2613053 kapi.go:107] duration metric: took 7.403701ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.413384ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-895903 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-895903 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [53ef4dbf-2396-4c39-8038-42a4efc92cdb] Pending
helpers_test.go:344: "task-pv-pod" [53ef4dbf-2396-4c39-8038-42a4efc92cdb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [53ef4dbf-2396-4c39-8038-42a4efc92cdb] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003529287s
addons_test.go:528: (dbg) Run:  kubectl --context addons-895903 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-895903 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-895903 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-895903 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-895903 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-895903 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b5d3cc23-61fc-45e0-9b59-9b35f4a58c29] Pending
helpers_test.go:344: "task-pv-pod-restore" [b5d3cc23-61fc-45e0-9b59-9b35f4a58c29] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b5d3cc23-61fc-45e0-9b59-9b35f4a58c29] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005157025s
addons_test.go:570: (dbg) Run:  kubectl --context addons-895903 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-895903 delete pod task-pv-pod-restore: (1.106941685s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-895903 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-895903 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-895903 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-895903 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.761914986s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-895903 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.50s)

TestAddons/parallel/Headlamp (16.34s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-895903 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-895903 --alsologtostderr -v=1: (1.478527366s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-wrd2r" [f95252ec-610e-40f4-82c7-861d8d6e1dfd] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-wrd2r" [f95252ec-610e-40f4-82c7-861d8d6e1dfd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-wrd2r" [f95252ec-610e-40f4-82c7-861d8d6e1dfd] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004387049s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-895903 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-895903 addons disable headlamp --alsologtostderr -v=1: (5.853253679s)
--- PASS: TestAddons/parallel/Headlamp (16.34s)

TestAddons/parallel/CloudSpanner (6.74s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-gmnrx" [d2959eb6-c058-4e0e-9fb6-48cb9827f0b5] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003484531s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-895903
--- PASS: TestAddons/parallel/CloudSpanner (6.74s)

TestAddons/parallel/LocalPath (8.84s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-895903 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-895903 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-895903 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3168d908-6525-4947-85ec-4f213737a327] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3168d908-6525-4947-85ec-4f213737a327] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3168d908-6525-4947-85ec-4f213737a327] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004119326s
addons_test.go:938: (dbg) Run:  kubectl --context addons-895903 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-895903 ssh "cat /opt/local-path-provisioner/pvc-e24cd200-a07e-47de-add6-77416b59fb31_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-895903 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-895903 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-895903 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.84s)
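The repeated helpers_test.go:394 lines above are the harness polling the PVC's `.status.phase` with `kubectl get pvc test-pvc -o jsonpath={.status.phase}` until it reports the desired phase. A minimal standalone sketch of such a poll loop (bash; the function name, timeout values, and stub command are illustrative, not taken from the test code):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the Go helper that repeatedly runs
# `kubectl get pvc <name> -o jsonpath={.status.phase}` until it matches.
wait_for_phase() {
  local want="$1" timeout="$2"; shift 2
  local deadline=$((SECONDS + timeout))   # SECONDS is a bash builtin counter
  while [ "$SECONDS" -le "$deadline" ]; do
    # "$@" stands in for the kubectl jsonpath query
    if [ "$("$@")" = "$want" ]; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Usage against a real cluster would look like:
#   wait_for_phase Bound 300 kubectl get pvc test-pvc -o 'jsonpath={.status.phase}'
```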

TestAddons/parallel/NvidiaDevicePlugin (5.68s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-r7wk4" [139101f4-490e-4130-90f0-4341fcfd1afb] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003956905s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-895903
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.68s)

TestAddons/parallel/Yakd (11.94s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-5wqrb" [2bdc45e2-2f5a-45c8-b164-8e7d3b5846b5] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003517587s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-895903 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-895903 addons disable yakd --alsologtostderr -v=1: (5.939266328s)
--- PASS: TestAddons/parallel/Yakd (11.94s)

TestAddons/StoppedEnableDisable (12.37s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-895903
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-895903: (12.101474081s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-895903
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-895903
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-895903
--- PASS: TestAddons/StoppedEnableDisable (12.37s)

TestCertOptions (35.35s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-703858 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-703858 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (32.744101037s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-703858 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-703858 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-703858 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-703858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-703858
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-703858: (1.957298121s)
--- PASS: TestCertOptions (35.35s)
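TestCertOptions asserts that the SANs passed via --apiserver-ips/--apiserver-names end up in the apiserver certificate by running `openssl x509 -text -noout` over SSH. The same inspection can be reproduced locally; the self-signed certificate generated below is a stand-in for /var/lib/minikube/certs/apiserver.crt (paths and subject are illustrative, and -addext needs OpenSSL 1.1.1+):

```shell
# Generate a throwaway cert carrying the same SANs the test requests...
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver.key -out /tmp/apiserver.crt \
  -subj "/CN=minikube" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.15.15,DNS:localhost,DNS:www.google.com"

# ...then inspect it the way the test does, looking for the expected
# IP addresses and DNS names in the SAN extension:
openssl x509 -text -noout -in /tmp/apiserver.crt | grep -A1 "Subject Alternative Name"
```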

TestCertExpiration (229.01s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-743198 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-743198 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.533295314s)
E0923 11:12:31.475901 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-743198 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-743198 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.062409538s)
helpers_test.go:175: Cleaning up "cert-expiration-743198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-743198
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-743198: (2.413347257s)
--- PASS: TestCertExpiration (229.01s)

TestForceSystemdFlag (44.82s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-950598 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-950598 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.773502706s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-950598 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-950598" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-950598
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-950598: (2.449099761s)
--- PASS: TestForceSystemdFlag (44.82s)

TestForceSystemdEnv (44.5s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-862718 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-862718 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.93941737s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-862718 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-862718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-862718
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-862718: (2.150056493s)
--- PASS: TestForceSystemdEnv (44.50s)

TestDockerEnvContainerd (47.04s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-680549 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-680549 --driver=docker  --container-runtime=containerd: (31.295755806s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-680549"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-680549": (1.018492622s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-pVTMVen8lP8J/agent.2632831" SSH_AGENT_PID="2632832" DOCKER_HOST=ssh://docker@127.0.0.1:41426 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-pVTMVen8lP8J/agent.2632831" SSH_AGENT_PID="2632832" DOCKER_HOST=ssh://docker@127.0.0.1:41426 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-pVTMVen8lP8J/agent.2632831" SSH_AGENT_PID="2632832" DOCKER_HOST=ssh://docker@127.0.0.1:41426 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.367638759s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-pVTMVen8lP8J/agent.2632831" SSH_AGENT_PID="2632832" DOCKER_HOST=ssh://docker@127.0.0.1:41426 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-680549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-680549
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-680549: (1.952930914s)
--- PASS: TestDockerEnvContainerd (47.04s)
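TestDockerEnvContainerd exercises the `docker-env --ssh-host --ssh-add` flow: minikube starts an ssh-agent, adds the node's key, and prints exports that point the docker CLI at the node over SSH. A sketch of the resulting environment; the socket path, agent PID, and port below are placeholders copied in shape (not value) from the log:

```shell
# Hypothetical values; in practice they come from
#   eval "$(minikube docker-env --ssh-host --ssh-add -p <profile>)"
export SSH_AUTH_SOCK="/tmp/ssh-XXXXXXXX/agent.1234"
export SSH_AGENT_PID="1235"
export DOCKER_HOST="ssh://docker@127.0.0.1:41426"

# With these set, plain docker commands tunnel to the minikube node, e.g.:
#   docker version
#   DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
#   docker image ls
echo "docker CLI now targets: $DOCKER_HOST"
```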

TestErrorSpam/setup (27.63s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-183894 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-183894 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-183894 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-183894 --driver=docker  --container-runtime=containerd: (27.626577037s)
--- PASS: TestErrorSpam/setup (27.63s)

TestErrorSpam/start (0.73s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

TestErrorSpam/status (1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 status
--- PASS: TestErrorSpam/status (1.00s)

TestErrorSpam/pause (1.75s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 pause
--- PASS: TestErrorSpam/pause (1.75s)

TestErrorSpam/unpause (1.85s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 stop: (1.31327992s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-183894 --log_dir /tmp/nospam-183894 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19689-2607666/.minikube/files/etc/test/nested/copy/2613053/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (78.62s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-238803 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-238803 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m18.614159451s)
--- PASS: TestFunctional/serial/StartWithProxy (78.62s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.14s)

=== RUN   TestFunctional/serial/SoftStart
I0923 10:37:30.990582 2613053 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-238803 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-238803 --alsologtostderr -v=8: (6.139336s)
functional_test.go:663: soft start took 6.141087747s for "functional-238803" cluster.
I0923 10:37:37.130248 2613053 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (6.14s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-238803 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-238803 cache add registry.k8s.io/pause:3.1: (1.460688748s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-238803 cache add registry.k8s.io/pause:3.3: (1.380699145s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-238803 cache add registry.k8s.io/pause:latest: (1.236274855s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.08s)

TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-238803 /tmp/TestFunctionalserialCacheCmdcacheadd_local2431016656/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 cache add minikube-local-cache-test:functional-238803
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 cache delete minikube-local-cache-test:functional-238803
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-238803
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-238803 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.848669ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-238803 cache reload: (1.14273711s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 kubectl -- --context functional-238803 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-238803 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (40.45s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-238803 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-238803 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.445553213s)
functional_test.go:761: restart took 40.445657221s for "functional-238803" cluster.
I0923 10:38:26.128449 2613053 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (40.45s)

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-238803 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

TestFunctional/serial/LogsCmd (1.74s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-238803 logs: (1.738471346s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

TestFunctional/serial/LogsFileCmd (1.73s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 logs --file /tmp/TestFunctionalserialLogsFileCmd2673231827/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-238803 logs --file /tmp/TestFunctionalserialLogsFileCmd2673231827/001/logs.txt: (1.728022439s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.73s)

TestFunctional/serial/InvalidService (4.05s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-238803 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-238803
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-238803: exit status 115 (677.153204ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31900 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-238803 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.05s)
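The non-zero exit above is status 115 (`SVC_UNREACHABLE`), returned when the Service object exists but no running pod backs it. A hedged sketch of how a caller might branch on that status; `classify_service_exit` is illustrative shell, not minikube's own code:

```shell
# Map a `minikube service` exit status to a short description.
# 115 is the SVC_UNREACHABLE status seen in the run above; other
# non-zero statuses are reported generically.
classify_service_exit() {
  case "$1" in
    0)   echo "service reachable" ;;
    115) echo "SVC_UNREACHABLE: no running pod for the service" ;;
    *)   echo "unexpected exit status: $1" ;;
  esac
}
```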

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-238803 config get cpus: exit status 14 (77.520435ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-238803 config get cpus: exit status 14 (77.213146ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
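In the run above, `config get` exits with status 14 whenever the key has been unset. A script consuming that behavior can fall back to a default on any non-zero exit; `get_cpus_or_default` below is a hypothetical wrapper sketched from the exit codes in this log:

```shell
# Read the "cpus" config value for a profile, defaulting to 2 when
# `minikube config get` fails (e.g. exit status 14: key not set).
get_cpus_or_default() {
  profile=$1
  if val=$(minikube -p "$profile" config get cpus 2>/dev/null); then
    echo "$val"
  else
    echo 2
  fi
}
```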

TestFunctional/parallel/DashboardCmd (8.42s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-238803 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-238803 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2648551: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.42s)

TestFunctional/parallel/DryRun (0.61s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-238803 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-238803 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (245.604746ms)

-- stdout --
	* [functional-238803] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-2607666/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2607666/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0923 10:39:09.731645 2647399 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:39:09.731819 2647399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:39:09.731830 2647399 out.go:358] Setting ErrFile to fd 2...
	I0923 10:39:09.731835 2647399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:39:09.732082 2647399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
	I0923 10:39:09.732923 2647399 out.go:352] Setting JSON to false
	I0923 10:39:09.733944 2647399 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":152497,"bootTime":1726935453,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 10:39:09.734020 2647399 start.go:139] virtualization:  
	I0923 10:39:09.737891 2647399 out.go:177] * [functional-238803] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 10:39:09.739851 2647399 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:39:09.739927 2647399 notify.go:220] Checking for updates...
	I0923 10:39:09.744547 2647399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:39:09.746371 2647399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-2607666/kubeconfig
	I0923 10:39:09.748017 2647399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2607666/.minikube
	I0923 10:39:09.749691 2647399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 10:39:09.751166 2647399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:39:09.753193 2647399 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 10:39:09.754281 2647399 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:39:09.775519 2647399 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:39:09.775642 2647399 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:39:09.884916 2647399 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 10:39:09.865827304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:39:09.885020 2647399 docker.go:318] overlay module found
	I0923 10:39:09.887345 2647399 out.go:177] * Using the docker driver based on existing profile
	I0923 10:39:09.888984 2647399 start.go:297] selected driver: docker
	I0923 10:39:09.889004 2647399 start.go:901] validating driver "docker" against &{Name:functional-238803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-238803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:39:09.889121 2647399 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:39:09.891345 2647399 out.go:201] 
	W0923 10:39:09.892973 2647399 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 10:39:09.895098 2647399 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-238803 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.61s)
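The dry-run failure above trips a pre-flight check: a `--memory` request below the usable minimum (1800MB here) aborts with exit status 23 and the `RSRC_INSUFFICIENT_REQ_MEMORY` reason before anything is created. A hedged stand-in for that validation, not minikube's actual implementation:

```shell
# Reject memory requests below the usable minimum, mirroring the
# RSRC_INSUFFICIENT_REQ_MEMORY check in the dry-run output above.
check_requested_memory() {
  requested_mb=$1
  minimum_mb=1800
  if [ "$requested_mb" -lt "$minimum_mb" ]; then
    echo "Requested memory allocation ${requested_mb}MiB is less than the usable minimum of ${minimum_mb}MB" >&2
    return 23  # same exit status the dry-run reports
  fi
}
```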

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-238803 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-238803 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (184.470965ms)

-- stdout --
	* [functional-238803] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-2607666/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2607666/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0923 10:39:12.829777 2648325 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:39:12.829985 2648325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:39:12.830017 2648325 out.go:358] Setting ErrFile to fd 2...
	I0923 10:39:12.830045 2648325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:39:12.830970 2648325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
	I0923 10:39:12.831429 2648325 out.go:352] Setting JSON to false
	I0923 10:39:12.832469 2648325 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":152500,"bootTime":1726935453,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 10:39:12.832576 2648325 start.go:139] virtualization:  
	I0923 10:39:12.836308 2648325 out.go:177] * [functional-238803] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0923 10:39:12.838228 2648325 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:39:12.838280 2648325 notify.go:220] Checking for updates...
	I0923 10:39:12.842136 2648325 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:39:12.843992 2648325 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-2607666/kubeconfig
	I0923 10:39:12.845886 2648325 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2607666/.minikube
	I0923 10:39:12.848689 2648325 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 10:39:12.850552 2648325 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:39:12.853020 2648325 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 10:39:12.853661 2648325 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:39:12.880821 2648325 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:39:12.880953 2648325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:39:12.939929 2648325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 10:39:12.929546184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:39:12.940037 2648325 docker.go:318] overlay module found
	I0923 10:39:12.943154 2648325 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0923 10:39:12.945214 2648325 start.go:297] selected driver: docker
	I0923 10:39:12.945240 2648325 start.go:901] validating driver "docker" against &{Name:functional-238803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-238803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:39:12.945364 2648325 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:39:12.948054 2648325 out.go:201] 
	W0923 10:39:12.949775 2648325 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 10:39:12.951528 2648325 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.08s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)
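The `-f` flag above renders `minikube status` through a Go template, producing a single line like `host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured` (`kublet` is the test's own spelling in its format string). A small sketch for pulling one field out of such a line; `status_field` is a hypothetical helper:

```shell
# Extract one field from a comma-separated key:value status line,
# e.g. status_field "host:Running,apiserver:Running" apiserver
status_field() {
  line=$1
  key=$2
  printf '%s\n' "$line" | tr ',' '\n' | awk -F: -v k="$key" '$1 == k { print $2 }'
}
```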

TestFunctional/parallel/ServiceCmdConnect (10.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-238803 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-238803 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-jc4cj" [3624eebe-e25f-4f58-9815-e1e5153924df] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-jc4cj" [3624eebe-e25f-4f58-9815-e1e5153924df] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004541598s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31761
functional_test.go:1675: http://192.168.49.2:31761: success! body:

Hostname: hello-node-connect-65d86f57f4-jc4cj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31761
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.67s)
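The success check above asserts that the echoserver body names the pod that answered the request. A hedged helper for extracting that hostname from a response body read on stdin; `echo_hostname` is illustrative only:

```shell
# Print the value of the "Hostname:" line from an echoserver
# response read on stdin, as in the body shown above.
echo_hostname() {
  awk -F': ' '/^Hostname:/ { print $2; exit }'
}
```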

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (26.6s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5c643720-4ae2-4f88-b132-39de3717c0c2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004306113s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-238803 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-238803 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-238803 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-238803 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [14e19950-4d34-4f5a-bcc3-ebf91b6f0f54] Pending
helpers_test.go:344: "sp-pod" [14e19950-4d34-4f5a-bcc3-ebf91b6f0f54] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [14e19950-4d34-4f5a-bcc3-ebf91b6f0f54] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003224937s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-238803 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-238803 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-238803 delete -f testdata/storage-provisioner/pod.yaml: (1.482482737s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-238803 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4a6e7686-5fa1-4d7e-a872-37c11b396749] Pending
helpers_test.go:344: "sp-pod" [4a6e7686-5fa1-4d7e-a872-37c11b396749] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004200154s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-238803 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.60s)

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (2.01s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh -n functional-238803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 cp functional-238803:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2769497687/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh -n functional-238803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh -n functional-238803 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.01s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2613053/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "sudo cat /etc/test/nested/copy/2613053/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.06s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2613053.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "sudo cat /etc/ssl/certs/2613053.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2613053.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "sudo cat /usr/share/ca-certificates/2613053.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/26130532.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "sudo cat /etc/ssl/certs/26130532.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/26130532.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "sudo cat /usr/share/ca-certificates/26130532.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.06s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-238803 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-238803 ssh "sudo systemctl is-active docker": exit status 1 (367.19477ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-238803 ssh "sudo systemctl is-active crio": exit status 1 (357.16973ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.3s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-238803 version -o=json --components: (1.300598642s)
--- PASS: TestFunctional/parallel/Version/components (1.30s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-238803 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-238803
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-238803
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-238803 image ls --format short --alsologtostderr:
I0923 10:39:22.838192 2650003 out.go:345] Setting OutFile to fd 1 ...
I0923 10:39:22.838402 2650003 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:39:22.838414 2650003 out.go:358] Setting ErrFile to fd 2...
I0923 10:39:22.838420 2650003 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:39:22.838716 2650003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
I0923 10:39:22.839576 2650003 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 10:39:22.839780 2650003 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 10:39:22.840377 2650003 cli_runner.go:164] Run: docker container inspect functional-238803 --format={{.State.Status}}
I0923 10:39:22.860627 2650003 ssh_runner.go:195] Run: systemctl --version
I0923 10:39:22.860687 2650003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-238803
I0923 10:39:22.890030 2650003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41436 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/functional-238803/id_rsa Username:docker}
I0923 10:39:22.983863 2650003 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-238803 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| docker.io/kicbase/echo-server               | functional-238803  | sha256:ce2d2c | 2.17MB |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/library/minikube-local-cache-test | functional-238803  | sha256:3e0f6f | 992B   |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-238803 image ls --format table --alsologtostderr:
I0923 10:39:23.464588 2650157 out.go:345] Setting OutFile to fd 1 ...
I0923 10:39:23.464992 2650157 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:39:23.465039 2650157 out.go:358] Setting ErrFile to fd 2...
I0923 10:39:23.465075 2650157 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:39:23.465458 2650157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
I0923 10:39:23.466433 2650157 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 10:39:23.466629 2650157 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 10:39:23.467167 2650157 cli_runner.go:164] Run: docker container inspect functional-238803 --format={{.State.Status}}
I0923 10:39:23.496939 2650157 ssh_runner.go:195] Run: systemctl --version
I0923 10:39:23.496990 2650157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-238803
I0923 10:39:23.522422 2650157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41436 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/functional-238803/id_rsa Username:docker}
I0923 10:39:23.619591 2650157 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-238803 image ls --format json --alsologtostderr:
[{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},
{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},
{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},
{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},
{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},
{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},
{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},
{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},
{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},
{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},
{"id":"sha256:3e0f6f6af98d9590219a1941aa459ca08e9c3eb577803fe554c53d305b4c2136","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-238803"],"size":"992"},
{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},
{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},
{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},
{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},
{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-238803"],"size":"2173567"},
{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},
{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},
{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},
{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-238803 image ls --format json --alsologtostderr:
I0923 10:39:23.167635 2650071 out.go:345] Setting OutFile to fd 1 ...
I0923 10:39:23.167820 2650071 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:39:23.167852 2650071 out.go:358] Setting ErrFile to fd 2...
I0923 10:39:23.167874 2650071 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:39:23.168144 2650071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
I0923 10:39:23.168801 2650071 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 10:39:23.168964 2650071 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 10:39:23.169493 2650071 cli_runner.go:164] Run: docker container inspect functional-238803 --format={{.State.Status}}
I0923 10:39:23.213743 2650071 ssh_runner.go:195] Run: systemctl --version
I0923 10:39:23.213808 2650071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-238803
I0923 10:39:23.244792 2650071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41436 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/functional-238803/id_rsa Username:docker}
I0923 10:39:23.342313 2650071 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-238803 image ls --format yaml --alsologtostderr:
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:3e0f6f6af98d9590219a1941aa459ca08e9c3eb577803fe554c53d305b4c2136
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-238803
size: "992"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-238803
size: "2173567"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-238803 image ls --format yaml --alsologtostderr:
I0923 10:39:22.850434 2650008 out.go:345] Setting OutFile to fd 1 ...
I0923 10:39:22.850701 2650008 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:39:22.850737 2650008 out.go:358] Setting ErrFile to fd 2...
I0923 10:39:22.850756 2650008 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:39:22.851048 2650008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
I0923 10:39:22.851876 2650008 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 10:39:22.852038 2650008 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 10:39:22.852623 2650008 cli_runner.go:164] Run: docker container inspect functional-238803 --format={{.State.Status}}
I0923 10:39:22.883433 2650008 ssh_runner.go:195] Run: systemctl --version
I0923 10:39:22.883493 2650008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-238803
I0923 10:39:22.909155 2650008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41436 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/functional-238803/id_rsa Username:docker}
I0923 10:39:23.010674 2650008 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-238803 ssh pgrep buildkitd: exit status 1 (355.418189ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image build -t localhost/my-image:functional-238803 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-238803 image build -t localhost/my-image:functional-238803 testdata/build --alsologtostderr: (3.088588824s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-238803 image build -t localhost/my-image:functional-238803 testdata/build --alsologtostderr:
I0923 10:39:23.480747 2650161 out.go:345] Setting OutFile to fd 1 ...
I0923 10:39:23.481864 2650161 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:39:23.481885 2650161 out.go:358] Setting ErrFile to fd 2...
I0923 10:39:23.481892 2650161 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:39:23.482183 2650161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
I0923 10:39:23.482847 2650161 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 10:39:23.484493 2650161 config.go:182] Loaded profile config "functional-238803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 10:39:23.485068 2650161 cli_runner.go:164] Run: docker container inspect functional-238803 --format={{.State.Status}}
I0923 10:39:23.510397 2650161 ssh_runner.go:195] Run: systemctl --version
I0923 10:39:23.510502 2650161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-238803
I0923 10:39:23.542394 2650161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41436 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/functional-238803/id_rsa Username:docker}
I0923 10:39:23.644269 2650161 build_images.go:161] Building image from path: /tmp/build.4031472948.tar
I0923 10:39:23.644338 2650161 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 10:39:23.658516 2650161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4031472948.tar
I0923 10:39:23.663102 2650161 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4031472948.tar: stat -c "%s %y" /var/lib/minikube/build/build.4031472948.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4031472948.tar': No such file or directory
I0923 10:39:23.663129 2650161 ssh_runner.go:362] scp /tmp/build.4031472948.tar --> /var/lib/minikube/build/build.4031472948.tar (3072 bytes)
I0923 10:39:23.696195 2650161 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4031472948
I0923 10:39:23.705289 2650161 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4031472948 -xf /var/lib/minikube/build/build.4031472948.tar
I0923 10:39:23.714611 2650161 containerd.go:394] Building image: /var/lib/minikube/build/build.4031472948
I0923 10:39:23.714697 2650161 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4031472948 --local dockerfile=/var/lib/minikube/build/build.4031472948 --output type=image,name=localhost/my-image:functional-238803
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:ff511e4cf041843e2e3783cd08ff42d076d457af261538d639d12ecb965bed39
#8 exporting manifest sha256:ff511e4cf041843e2e3783cd08ff42d076d457af261538d639d12ecb965bed39 0.0s done
#8 exporting config sha256:89c5d039b607e809e38697409f6c2e7c1c61f5f249b8129e6eae5fe7fc8348ba done
#8 naming to localhost/my-image:functional-238803 done
#8 DONE 0.1s
I0923 10:39:26.457647 2650161 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4031472948 --local dockerfile=/var/lib/minikube/build/build.4031472948 --output type=image,name=localhost/my-image:functional-238803: (2.742917375s)
I0923 10:39:26.457713 2650161 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4031472948
I0923 10:39:26.468285 2650161 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4031472948.tar
I0923 10:39:26.478658 2650161 build_images.go:217] Built localhost/my-image:functional-238803 from /tmp/build.4031472948.tar
I0923 10:39:26.478687 2650161 build_images.go:133] succeeded building to: functional-238803
I0923 10:39:26.478692 2650161 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.68s)

TestFunctional/parallel/ImageCommands/Setup (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-238803
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image load --daemon kicbase/echo-server:functional-238803 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-238803 image load --daemon kicbase/echo-server:functional-238803 --alsologtostderr: (1.239360499s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.51s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image load --daemon kicbase/echo-server:functional-238803 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-238803 image load --daemon kicbase/echo-server:functional-238803 --alsologtostderr: (1.151143428s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.41s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-238803
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image load --daemon kicbase/echo-server:functional-238803 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-238803 image load --daemon kicbase/echo-server:functional-238803 --alsologtostderr: (1.166627252s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "434.056896ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "75.663811ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "415.54755ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "72.153525ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-238803 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-238803 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-238803 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-238803 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2645901: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image save kicbase/echo-server:functional-238803 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image rm kicbase/echo-server:functional-238803 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-238803 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-238803 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [52642b54-f39f-4ba1-8bfe-35313ea09daf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [52642b54-f39f-4ba1-8bfe-35313ea09daf] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004592774s
I0923 10:38:50.619439 2613053 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-238803
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 image save --daemon kicbase/echo-server:functional-238803 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-238803
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-238803 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.52.52 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-238803 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-238803 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-238803 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-t9l5m" [c6a9594a-b890-4f0b-b440-33610088c90b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-t9l5m" [c6a9594a-b890-4f0b-b440-33610088c90b] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004564903s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 service list -o json
functional_test.go:1494: Took "591.705288ms" to run "out/minikube-linux-arm64 -p functional-238803 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31965
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

TestFunctional/parallel/MountCmd/any-port (7.71s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-238803 /tmp/TestFunctionalparallelMountCmdany-port1632022016/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727087950271552677" to /tmp/TestFunctionalparallelMountCmdany-port1632022016/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727087950271552677" to /tmp/TestFunctionalparallelMountCmdany-port1632022016/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727087950271552677" to /tmp/TestFunctionalparallelMountCmdany-port1632022016/001/test-1727087950271552677
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 23 10:39 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 23 10:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 23 10:39 test-1727087950271552677
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh cat /mount-9p/test-1727087950271552677
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-238803 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5e5433b4-6f3e-4908-a654-75e79689a5f2] Pending
helpers_test.go:344: "busybox-mount" [5e5433b4-6f3e-4908-a654-75e79689a5f2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5e5433b4-6f3e-4908-a654-75e79689a5f2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5e5433b4-6f3e-4908-a654-75e79689a5f2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003173076s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-238803 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-238803 /tmp/TestFunctionalparallelMountCmdany-port1632022016/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.71s)

TestFunctional/parallel/ServiceCmd/Format (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31965
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)

TestFunctional/parallel/MountCmd/specific-port (2.08s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-238803 /tmp/TestFunctionalparallelMountCmdspecific-port675376307/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-238803 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (489.698564ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0923 10:39:18.460393 2613053 retry.go:31] will retry after 424.663318ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-238803 /tmp/TestFunctionalparallelMountCmdspecific-port675376307/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-238803 ssh "sudo umount -f /mount-9p": exit status 1 (342.474096ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-238803 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-238803 /tmp/TestFunctionalparallelMountCmdspecific-port675376307/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.08s)
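The `retry.go:31` line above comes from the harness re-probing `findmnt` until the 9p mount appears. A minimal stand-alone sketch of that retry-with-backoff pattern (the `probe` helper, attempt count, and delays here are illustrative, not minikube's actual implementation):

```shell
#!/bin/sh
# retry ATTEMPTS CMD... : rerun CMD until it succeeds, doubling the
# delay after each failure, giving up after ATTEMPTS tries.
retry() {
  attempts=$1; shift
  delay=1
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then return 0; fi
    sleep "$delay"
    delay=$((delay * 2))
    i=$((i + 1))
  done
  return 1
}

# Illustrative probe: succeeds once /mount-9p shows up as a 9p mount,
# matching the "findmnt -T /mount-9p | grep 9p" check in the log above.
probe() { findmnt -T /mount-9p | grep -q 9p; }

# retry 5 probe   # would poll for the mount, as the test harness does
```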

TestFunctional/parallel/MountCmd/VerifyCleanup (2.19s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-238803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3891821768/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-238803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3891821768/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-238803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3891821768/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-238803 ssh "findmnt -T" /mount1: exit status 1 (860.794628ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0923 10:39:20.912028 2613053 retry.go:31] will retry after 290.318243ms: exit status 1
2024/09/23 10:39:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-238803 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-238803 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-238803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3891821768/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-238803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3891821768/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-238803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3891821768/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.19s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-238803
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-238803
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-238803
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (112.21s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-826887 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0923 10:39:29.697531 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:30.978941 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:33.541231 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:38.663189 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:48.904975 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:09.386336 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:50.348202 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-826887 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m51.40829575s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (112.21s)

TestMultiControlPlane/serial/DeployApp (30.09s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-826887 -- rollout status deployment/busybox: (27.073142662s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-bx9x8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-kpxt4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-rwkgw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-bx9x8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-kpxt4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-rwkgw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-bx9x8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-kpxt4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-rwkgw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (30.09s)

TestMultiControlPlane/serial/PingHostFromPods (1.62s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-bx9x8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-bx9x8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-kpxt4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-kpxt4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-rwkgw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-826887 -- exec busybox-7dff88458-rwkgw -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.62s)
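The DNS probe above digs the host IP out of `nslookup` output with `awk 'NR==5' | cut -d' ' -f3`. A stand-alone sketch of that extraction against canned output (the sample text below is illustrative formatting; the real test runs `nslookup host.minikube.internal` inside each busybox pod):

```shell
#!/bin/sh
# Canned nslookup-style output; line 5 carries the resolved address.
out='Server:    10.96.0.10
Address 1: 10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.49.1'

# Same pipeline as the test: take line 5, then the third space-separated field.
ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"   # 192.168.49.1
```

The extracted address is what the follow-up `ping -c 1` commands in the log target.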

TestMultiControlPlane/serial/AddWorkerNode (24.4s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-826887 -v=7 --alsologtostderr
E0923 10:42:12.270019 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-826887 -v=7 --alsologtostderr: (23.445236156s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.40s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-826887 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.005250746s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

TestMultiControlPlane/serial/CopyFile (19.01s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-826887 status --output json -v=7 --alsologtostderr: (1.061661026s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp testdata/cp-test.txt ha-826887:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4217944766/001/cp-test_ha-826887.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887:/home/docker/cp-test.txt ha-826887-m02:/home/docker/cp-test_ha-826887_ha-826887-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m02 "sudo cat /home/docker/cp-test_ha-826887_ha-826887-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887:/home/docker/cp-test.txt ha-826887-m03:/home/docker/cp-test_ha-826887_ha-826887-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m03 "sudo cat /home/docker/cp-test_ha-826887_ha-826887-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887:/home/docker/cp-test.txt ha-826887-m04:/home/docker/cp-test_ha-826887_ha-826887-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m04 "sudo cat /home/docker/cp-test_ha-826887_ha-826887-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp testdata/cp-test.txt ha-826887-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4217944766/001/cp-test_ha-826887-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887-m02:/home/docker/cp-test.txt ha-826887:/home/docker/cp-test_ha-826887-m02_ha-826887.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887 "sudo cat /home/docker/cp-test_ha-826887-m02_ha-826887.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887-m02:/home/docker/cp-test.txt ha-826887-m03:/home/docker/cp-test_ha-826887-m02_ha-826887-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m03 "sudo cat /home/docker/cp-test_ha-826887-m02_ha-826887-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887-m02:/home/docker/cp-test.txt ha-826887-m04:/home/docker/cp-test_ha-826887-m02_ha-826887-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m04 "sudo cat /home/docker/cp-test_ha-826887-m02_ha-826887-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp testdata/cp-test.txt ha-826887-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4217944766/001/cp-test_ha-826887-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887-m03:/home/docker/cp-test.txt ha-826887:/home/docker/cp-test_ha-826887-m03_ha-826887.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887 "sudo cat /home/docker/cp-test_ha-826887-m03_ha-826887.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887-m03:/home/docker/cp-test.txt ha-826887-m02:/home/docker/cp-test_ha-826887-m03_ha-826887-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m02 "sudo cat /home/docker/cp-test_ha-826887-m03_ha-826887-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887-m03:/home/docker/cp-test.txt ha-826887-m04:/home/docker/cp-test_ha-826887-m03_ha-826887-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m04 "sudo cat /home/docker/cp-test_ha-826887-m03_ha-826887-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp testdata/cp-test.txt ha-826887-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4217944766/001/cp-test_ha-826887-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887-m04:/home/docker/cp-test.txt ha-826887:/home/docker/cp-test_ha-826887-m04_ha-826887.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887 "sudo cat /home/docker/cp-test_ha-826887-m04_ha-826887.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887-m04:/home/docker/cp-test.txt ha-826887-m02:/home/docker/cp-test_ha-826887-m04_ha-826887-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m02 "sudo cat /home/docker/cp-test_ha-826887-m04_ha-826887-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 cp ha-826887-m04:/home/docker/cp-test.txt ha-826887-m03:/home/docker/cp-test_ha-826887-m04_ha-826887-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 ssh -n ha-826887-m03 "sudo cat /home/docker/cp-test_ha-826887-m04_ha-826887-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.01s)
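Each `cp` / `ssh -n ... "sudo cat ..."` pair above is a round-trip check: copy the file to a node, read it back, and compare against the source. Locally the same check can be sketched with plain `cp` standing in for `minikube cp` (paths and payload are illustrative):

```shell
#!/bin/sh
# Round-trip check: write a file, copy it, compare the copy to the source.
src=$(mktemp) && dst=$(mktemp)
echo 'cp-test payload' > "$src"
cp "$src" "$dst"        # stand-in for: minikube -p <profile> cp <src> <node>:<dst>
result=$(cmp -s "$src" "$dst" && echo match)
rm -f "$src" "$dst"
echo "$result"          # match
```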

TestMultiControlPlane/serial/StopSecondaryNode (12.93s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-826887 node stop m02 -v=7 --alsologtostderr: (12.185170659s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-826887 status -v=7 --alsologtostderr: exit status 7 (742.898779ms)
-- stdout --
	ha-826887
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-826887-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-826887-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-826887-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0923 10:42:50.157217 2666299 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:42:50.157469 2666299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:42:50.157502 2666299 out.go:358] Setting ErrFile to fd 2...
	I0923 10:42:50.157523 2666299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:42:50.157841 2666299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
	I0923 10:42:50.158162 2666299 out.go:352] Setting JSON to false
	I0923 10:42:50.158233 2666299 mustload.go:65] Loading cluster: ha-826887
	I0923 10:42:50.158342 2666299 notify.go:220] Checking for updates...
	I0923 10:42:50.158786 2666299 config.go:182] Loaded profile config "ha-826887": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 10:42:50.158832 2666299 status.go:174] checking status of ha-826887 ...
	I0923 10:42:50.159863 2666299 cli_runner.go:164] Run: docker container inspect ha-826887 --format={{.State.Status}}
	I0923 10:42:50.184172 2666299 status.go:364] ha-826887 host status = "Running" (err=<nil>)
	I0923 10:42:50.184197 2666299 host.go:66] Checking if "ha-826887" exists ...
	I0923 10:42:50.184521 2666299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-826887
	I0923 10:42:50.212940 2666299 host.go:66] Checking if "ha-826887" exists ...
	I0923 10:42:50.213295 2666299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:42:50.213373 2666299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-826887
	I0923 10:42:50.230862 2666299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41441 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/ha-826887/id_rsa Username:docker}
	I0923 10:42:50.325701 2666299 ssh_runner.go:195] Run: systemctl --version
	I0923 10:42:50.330308 2666299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:42:50.342779 2666299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:42:50.398775 2666299 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-23 10:42:50.38630979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:42:50.399534 2666299 kubeconfig.go:125] found "ha-826887" server: "https://192.168.49.254:8443"
	I0923 10:42:50.399591 2666299 api_server.go:166] Checking apiserver status ...
	I0923 10:42:50.399678 2666299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:42:50.410892 2666299 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	I0923 10:42:50.420789 2666299 api_server.go:182] apiserver freezer: "4:freezer:/docker/271d7d590a9ee9c7e44f4ec073498df6e97534af95367663bb642c02647becb8/kubepods/burstable/pod85df1ed92d65492b841a8ed119cfbba0/67fd37a1cbb16133cbb1ca8ca127b38a1ad8a9c668108b38f3c866dab1b83974"
	I0923 10:42:50.420881 2666299 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/271d7d590a9ee9c7e44f4ec073498df6e97534af95367663bb642c02647becb8/kubepods/burstable/pod85df1ed92d65492b841a8ed119cfbba0/67fd37a1cbb16133cbb1ca8ca127b38a1ad8a9c668108b38f3c866dab1b83974/freezer.state
	I0923 10:42:50.432361 2666299 api_server.go:204] freezer state: "THAWED"
	I0923 10:42:50.432391 2666299 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 10:42:50.440436 2666299 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 10:42:50.440467 2666299 status.go:456] ha-826887 apiserver status = Running (err=<nil>)
	I0923 10:42:50.440478 2666299 status.go:176] ha-826887 status: &{Name:ha-826887 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:42:50.440494 2666299 status.go:174] checking status of ha-826887-m02 ...
	I0923 10:42:50.440811 2666299 cli_runner.go:164] Run: docker container inspect ha-826887-m02 --format={{.State.Status}}
	I0923 10:42:50.458309 2666299 status.go:364] ha-826887-m02 host status = "Stopped" (err=<nil>)
	I0923 10:42:50.458331 2666299 status.go:377] host is not running, skipping remaining checks
	I0923 10:42:50.458338 2666299 status.go:176] ha-826887-m02 status: &{Name:ha-826887-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:42:50.458359 2666299 status.go:174] checking status of ha-826887-m03 ...
	I0923 10:42:50.458697 2666299 cli_runner.go:164] Run: docker container inspect ha-826887-m03 --format={{.State.Status}}
	I0923 10:42:50.484051 2666299 status.go:364] ha-826887-m03 host status = "Running" (err=<nil>)
	I0923 10:42:50.484078 2666299 host.go:66] Checking if "ha-826887-m03" exists ...
	I0923 10:42:50.484387 2666299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-826887-m03
	I0923 10:42:50.503978 2666299 host.go:66] Checking if "ha-826887-m03" exists ...
	I0923 10:42:50.504299 2666299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:42:50.504350 2666299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-826887-m03
	I0923 10:42:50.523023 2666299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41451 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/ha-826887-m03/id_rsa Username:docker}
	I0923 10:42:50.617169 2666299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:42:50.631543 2666299 kubeconfig.go:125] found "ha-826887" server: "https://192.168.49.254:8443"
	I0923 10:42:50.631574 2666299 api_server.go:166] Checking apiserver status ...
	I0923 10:42:50.631649 2666299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:42:50.643107 2666299 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	I0923 10:42:50.656113 2666299 api_server.go:182] apiserver freezer: "4:freezer:/docker/d946a55c1a8f0afe98f5d4370aae4f054d0c02cf03e68d8878296bcf0786d1d9/kubepods/burstable/poddaaa4e0cef19861488cf85872a24fc1f/d241088ba3b312745b43dd17ef961b550936987ffa98af4128a55ef8ceb11d41"
	I0923 10:42:50.656203 2666299 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d946a55c1a8f0afe98f5d4370aae4f054d0c02cf03e68d8878296bcf0786d1d9/kubepods/burstable/poddaaa4e0cef19861488cf85872a24fc1f/d241088ba3b312745b43dd17ef961b550936987ffa98af4128a55ef8ceb11d41/freezer.state
	I0923 10:42:50.668590 2666299 api_server.go:204] freezer state: "THAWED"
	I0923 10:42:50.668708 2666299 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 10:42:50.677299 2666299 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 10:42:50.677330 2666299 status.go:456] ha-826887-m03 apiserver status = Running (err=<nil>)
	I0923 10:42:50.677340 2666299 status.go:176] ha-826887-m03 status: &{Name:ha-826887-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:42:50.677356 2666299 status.go:174] checking status of ha-826887-m04 ...
	I0923 10:42:50.677655 2666299 cli_runner.go:164] Run: docker container inspect ha-826887-m04 --format={{.State.Status}}
	I0923 10:42:50.697012 2666299 status.go:364] ha-826887-m04 host status = "Running" (err=<nil>)
	I0923 10:42:50.697038 2666299 host.go:66] Checking if "ha-826887-m04" exists ...
	I0923 10:42:50.697350 2666299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-826887-m04
	I0923 10:42:50.713776 2666299 host.go:66] Checking if "ha-826887-m04" exists ...
	I0923 10:42:50.714969 2666299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:42:50.715049 2666299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-826887-m04
	I0923 10:42:50.731550 2666299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41456 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/ha-826887-m04/id_rsa Username:docker}
	I0923 10:42:50.824651 2666299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:42:50.836877 2666299 status.go:176] ha-826887-m04 status: &{Name:ha-826887-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.93s)
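The apiserver status check traced in the stderr above follows a fixed sequence: find the kube-apiserver PID with `pgrep`, map it to its freezer cgroup via `/proc/<pid>/cgroup`, confirm the cgroup's `freezer.state` is `THAWED`, then probe `/healthz`. A minimal sketch of the cgroup-parsing step (function names are illustrative, not minikube's actual API):

```python
import re

def parse_freezer_path(cgroup_text):
    """Extract the freezer cgroup path from /proc/<pid>/cgroup contents.

    Lines look like '4:freezer:/docker/<id>/kubepods/...'; this matches
    the same lines as `sudo egrep '^[0-9]+:freezer:'` in the log above.
    """
    for line in cgroup_text.splitlines():
        m = re.match(r"^\d+:freezer:(.*)$", line)
        if m:
            return m.group(1)
    return None

def freezer_state_file(freezer_path):
    # Under the cgroup v1 freezer hierarchy, the state file holds
    # "THAWED" when the cgroup's processes are runnable.
    return "/sys/fs/cgroup/freezer" + freezer_path + "/freezer.state"
```

Only once the state reads `THAWED` does the check proceed to the HTTPS `/healthz` probe seen at `api_server.go:253`.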

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

TestMultiControlPlane/serial/RestartSecondaryNode (31.07s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-826887 node start m02 -v=7 --alsologtostderr: (29.842640374s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-826887 status -v=7 --alsologtostderr: (1.113363596s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.020512517s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (143.31s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-826887 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-826887 -v=7 --alsologtostderr
E0923 10:43:41.158993 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:41.165461 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:41.176911 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:41.198431 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:41.239943 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:41.321368 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:41.482917 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:41.804602 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:42.446012 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:43.727427 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:46.289177 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:51.411147 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-826887 -v=7 --alsologtostderr: (37.101428904s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-826887 --wait=true -v=7 --alsologtostderr
E0923 10:44:01.652775 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:44:22.134719 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:44:28.407038 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:44:56.112244 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:45:03.096120 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-826887 --wait=true -v=7 --alsologtostderr: (1m46.031609929s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-826887
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (143.31s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.53s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-826887 node delete m03 -v=7 --alsologtostderr: (9.602787761s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.53s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

TestMultiControlPlane/serial/StopCluster (35.98s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 stop -v=7 --alsologtostderr
E0923 10:46:25.017616 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-826887 stop -v=7 --alsologtostderr: (35.872250541s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-826887 status -v=7 --alsologtostderr: exit status 7 (109.272384ms)

-- stdout --
	ha-826887
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-826887-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-826887-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 10:46:34.178475 2680610 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:46:34.178701 2680610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:46:34.178729 2680610 out.go:358] Setting ErrFile to fd 2...
	I0923 10:46:34.178746 2680610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:46:34.179027 2680610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
	I0923 10:46:34.179261 2680610 out.go:352] Setting JSON to false
	I0923 10:46:34.179373 2680610 mustload.go:65] Loading cluster: ha-826887
	I0923 10:46:34.179430 2680610 notify.go:220] Checking for updates...
	I0923 10:46:34.180491 2680610 config.go:182] Loaded profile config "ha-826887": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 10:46:34.180534 2680610 status.go:174] checking status of ha-826887 ...
	I0923 10:46:34.181143 2680610 cli_runner.go:164] Run: docker container inspect ha-826887 --format={{.State.Status}}
	I0923 10:46:34.198409 2680610 status.go:364] ha-826887 host status = "Stopped" (err=<nil>)
	I0923 10:46:34.198429 2680610 status.go:377] host is not running, skipping remaining checks
	I0923 10:46:34.198436 2680610 status.go:176] ha-826887 status: &{Name:ha-826887 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:46:34.198457 2680610 status.go:174] checking status of ha-826887-m02 ...
	I0923 10:46:34.198780 2680610 cli_runner.go:164] Run: docker container inspect ha-826887-m02 --format={{.State.Status}}
	I0923 10:46:34.220757 2680610 status.go:364] ha-826887-m02 host status = "Stopped" (err=<nil>)
	I0923 10:46:34.220776 2680610 status.go:377] host is not running, skipping remaining checks
	I0923 10:46:34.220784 2680610 status.go:176] ha-826887-m02 status: &{Name:ha-826887-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:46:34.220802 2680610 status.go:174] checking status of ha-826887-m04 ...
	I0923 10:46:34.221093 2680610 cli_runner.go:164] Run: docker container inspect ha-826887-m04 --format={{.State.Status}}
	I0923 10:46:34.240261 2680610 status.go:364] ha-826887-m04 host status = "Stopped" (err=<nil>)
	I0923 10:46:34.240282 2680610 status.go:377] host is not running, skipping remaining checks
	I0923 10:46:34.240290 2680610 status.go:176] ha-826887-m04 status: &{Name:ha-826887-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.98s)
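The plain-text stdout block above uses a simple layout: a bare node name, followed by indented `key: value` fields, with a blank line between nodes. A small sketch of parsing that layout into a dict (a hypothetical helper, not part of minikube):

```python
def parse_status(text):
    """Parse `minikube status` plain-text output into
    {node_name: {field: value}}, per the layout shown above."""
    nodes, current = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            current = None  # blank line ends the current node's block
            continue
        if ":" in line and current is not None:
            key, value = line.split(":", 1)
            nodes[current][key.strip()] = value.strip()
        else:
            current = line  # a bare name starts a new node block
            nodes[current] = {}
    return nodes
```

Note that worker nodes (like `ha-826887-m04` above) simply omit the `apiserver` and `kubeconfig` fields rather than reporting them.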

TestMultiControlPlane/serial/RestartCluster (68.44s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-826887 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-826887 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m7.435551858s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (68.44s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

TestMultiControlPlane/serial/AddSecondaryNode (43.74s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-826887 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-826887 --control-plane -v=7 --alsologtostderr: (42.743784215s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-826887 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.74s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.035622606s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.04s)

TestJSONOutput/start/Command (50.5s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-343611 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0923 10:48:41.160052 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:49:08.859617 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-343611 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (50.493895331s)
--- PASS: TestJSONOutput/start/Command (50.50s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-343611 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-343611 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.8s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-343611 --output=json --user=testUser
E0923 10:49:28.407496 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-343611 --output=json --user=testUser: (5.796302742s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-386878 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-386878 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.977911ms)

-- stdout --
	{"specversion":"1.0","id":"6efe3e01-0d95-4e4d-8886-3901f401fbe5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-386878] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"865a95b4-f7c8-4e84-8db0-a2b3ee98ea1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"e7382c52-b03c-4e12-91ec-fb637a3bfb31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b46bb790-f8e5-477e-b03b-e43a7dab1265","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19689-2607666/kubeconfig"}}
	{"specversion":"1.0","id":"d3199adc-424f-4ad9-8802-2aad550424ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2607666/.minikube"}}
	{"specversion":"1.0","id":"bc5adbce-69d7-4f4d-847e-2d79ce2c2de1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d4f212d7-7b61-4cb2-84a8-378e5c457be9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"51246a2b-a6b0-40b3-bc88-04e323f86313","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-386878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-386878
--- PASS: TestErrorJSONOutput (0.25s)

TestKicCustomNetwork/create_custom_network (37.83s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-612803 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-612803 --network=: (35.722164636s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-612803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-612803
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-612803: (2.08432923s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.83s)

TestKicCustomNetwork/use_default_bridge_network (34.44s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-260918 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-260918 --network=bridge: (32.517445123s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-260918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-260918
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-260918: (1.904057473s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.44s)

TestKicExistingNetwork (31.52s)
=== RUN   TestKicExistingNetwork
I0923 10:50:50.485694 2613053 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0923 10:50:50.501452 2613053 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0923 10:50:50.502041 2613053 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0923 10:50:50.502080 2613053 cli_runner.go:164] Run: docker network inspect existing-network
W0923 10:50:50.524020 2613053 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0923 10:50:50.524059 2613053 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0923 10:50:50.524073 2613053 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0923 10:50:50.524180 2613053 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 10:50:50.539556 2613053 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-851c180d608c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:97:68:30:a3} reservation:<nil>}
I0923 10:50:50.544537 2613053 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0923 10:50:50.545033 2613053 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40012c6860}
I0923 10:50:50.545059 2613053 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0923 10:50:50.545113 2613053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0923 10:50:50.619612 2613053 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-631362 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-631362 --network=existing-network: (29.461366924s)
helpers_test.go:175: Cleaning up "existing-network-631362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-631362
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-631362: (1.896441481s)
I0923 10:51:21.993354 2613053 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.52s)

TestKicCustomSubnet (33.99s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-432452 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-432452 --subnet=192.168.60.0/24: (31.936292773s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-432452 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-432452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-432452
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-432452: (2.031121046s)
--- PASS: TestKicCustomSubnet (33.99s)

TestKicStaticIP (31.14s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-366685 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-366685 --static-ip=192.168.200.200: (28.857506407s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-366685 ip
helpers_test.go:175: Cleaning up "static-ip-366685" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-366685
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-366685: (2.116842794s)
--- PASS: TestKicStaticIP (31.14s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (62.53s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-587851 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-587851 --driver=docker  --container-runtime=containerd: (27.776453942s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-590501 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-590501 --driver=docker  --container-runtime=containerd: (29.351298191s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-587851
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-590501
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-590501" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-590501
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-590501: (1.925563035s)
helpers_test.go:175: Cleaning up "first-587851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-587851
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-587851: (2.161017419s)
--- PASS: TestMinikubeProfile (62.53s)

TestMountStart/serial/StartWithMountFirst (5.94s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-594122 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-594122 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.936412224s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.94s)

TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-594122 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.21s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-596124 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-596124 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.205704506s)
E0923 10:53:41.157574 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/StartWithMountSecond (6.21s)

TestMountStart/serial/VerifyMountSecond (0.29s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-596124 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.61s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-594122 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-594122 --alsologtostderr -v=5: (1.613999596s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-596124 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.19s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-596124
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-596124: (1.191961923s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (7.32s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-596124
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-596124: (6.320312268s)
--- PASS: TestMountStart/serial/RestartStopped (7.32s)

TestMountStart/serial/VerifyMountPostStop (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-596124 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (68.34s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-934659 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0923 10:54:28.407787 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-934659 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m7.8398338s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (68.34s)

TestMultiNode/serial/DeployApp2Nodes (17.25s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-934659 -- rollout status deployment/busybox: (15.174058886s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- exec busybox-7dff88458-p84lg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- exec busybox-7dff88458-t8plg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- exec busybox-7dff88458-p84lg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- exec busybox-7dff88458-t8plg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- exec busybox-7dff88458-p84lg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- exec busybox-7dff88458-t8plg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.25s)

TestMultiNode/serial/PingHostFrom2Pods (0.99s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- exec busybox-7dff88458-p84lg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- exec busybox-7dff88458-p84lg -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- exec busybox-7dff88458-t8plg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-934659 -- exec busybox-7dff88458-t8plg -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

TestMultiNode/serial/AddNode (17.68s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-934659 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-934659 -v 3 --alsologtostderr: (16.978901085s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.68s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-934659 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.67s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (10.02s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 cp testdata/cp-test.txt multinode-934659:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 cp multinode-934659:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3727047291/001/cp-test_multinode-934659.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 cp multinode-934659:/home/docker/cp-test.txt multinode-934659-m02:/home/docker/cp-test_multinode-934659_multinode-934659-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659-m02 "sudo cat /home/docker/cp-test_multinode-934659_multinode-934659-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 cp multinode-934659:/home/docker/cp-test.txt multinode-934659-m03:/home/docker/cp-test_multinode-934659_multinode-934659-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659-m03 "sudo cat /home/docker/cp-test_multinode-934659_multinode-934659-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 cp testdata/cp-test.txt multinode-934659-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 cp multinode-934659-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3727047291/001/cp-test_multinode-934659-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 cp multinode-934659-m02:/home/docker/cp-test.txt multinode-934659:/home/docker/cp-test_multinode-934659-m02_multinode-934659.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659 "sudo cat /home/docker/cp-test_multinode-934659-m02_multinode-934659.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 cp multinode-934659-m02:/home/docker/cp-test.txt multinode-934659-m03:/home/docker/cp-test_multinode-934659-m02_multinode-934659-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659-m03 "sudo cat /home/docker/cp-test_multinode-934659-m02_multinode-934659-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 cp testdata/cp-test.txt multinode-934659-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 cp multinode-934659-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3727047291/001/cp-test_multinode-934659-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 cp multinode-934659-m03:/home/docker/cp-test.txt multinode-934659:/home/docker/cp-test_multinode-934659-m03_multinode-934659.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659 "sudo cat /home/docker/cp-test_multinode-934659-m03_multinode-934659.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 cp multinode-934659-m03:/home/docker/cp-test.txt multinode-934659-m02:/home/docker/cp-test_multinode-934659-m03_multinode-934659-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 ssh -n multinode-934659-m02 "sudo cat /home/docker/cp-test_multinode-934659-m03_multinode-934659-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.02s)

TestMultiNode/serial/StopNode (2.3s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-934659 node stop m03: (1.221546948s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 status
E0923 10:55:51.473579 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-934659 status: exit status 7 (518.01066ms)
-- stdout --
	multinode-934659
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-934659-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-934659-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-934659 status --alsologtostderr: exit status 7 (560.42128ms)
-- stdout --
	multinode-934659
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-934659-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-934659-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0923 10:55:51.662022 2733844 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:55:51.662177 2733844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:55:51.662194 2733844 out.go:358] Setting ErrFile to fd 2...
	I0923 10:55:51.662200 2733844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:55:51.662457 2733844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
	I0923 10:55:51.662649 2733844 out.go:352] Setting JSON to false
	I0923 10:55:51.662683 2733844 mustload.go:65] Loading cluster: multinode-934659
	I0923 10:55:51.662780 2733844 notify.go:220] Checking for updates...
	I0923 10:55:51.663123 2733844 config.go:182] Loaded profile config "multinode-934659": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 10:55:51.663147 2733844 status.go:174] checking status of multinode-934659 ...
	I0923 10:55:51.664070 2733844 cli_runner.go:164] Run: docker container inspect multinode-934659 --format={{.State.Status}}
	I0923 10:55:51.684538 2733844 status.go:364] multinode-934659 host status = "Running" (err=<nil>)
	I0923 10:55:51.684571 2733844 host.go:66] Checking if "multinode-934659" exists ...
	I0923 10:55:51.685540 2733844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-934659
	I0923 10:55:51.715536 2733844 host.go:66] Checking if "multinode-934659" exists ...
	I0923 10:55:51.715867 2733844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:55:51.715918 2733844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-934659
	I0923 10:55:51.734354 2733844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41561 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/multinode-934659/id_rsa Username:docker}
	I0923 10:55:51.828317 2733844 ssh_runner.go:195] Run: systemctl --version
	I0923 10:55:51.832800 2733844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:55:51.844650 2733844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:55:51.908261 2733844 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-23 10:55:51.889882809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:55:51.908855 2733844 kubeconfig.go:125] found "multinode-934659" server: "https://192.168.58.2:8443"
	I0923 10:55:51.908883 2733844 api_server.go:166] Checking apiserver status ...
	I0923 10:55:51.908925 2733844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:55:51.939852 2733844 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup
	I0923 10:55:51.955798 2733844 api_server.go:182] apiserver freezer: "4:freezer:/docker/cec2a342348d844dfece8ec250b66d32eabda580c06c819d83cc3de00ab0a070/kubepods/burstable/pod8dbd59b5af1dc57c3d8b1ca984992741/6751930cd1597583b3588334dce32c6cc5ac7d06805c0a643863937653626b90"
	I0923 10:55:51.955876 2733844 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cec2a342348d844dfece8ec250b66d32eabda580c06c819d83cc3de00ab0a070/kubepods/burstable/pod8dbd59b5af1dc57c3d8b1ca984992741/6751930cd1597583b3588334dce32c6cc5ac7d06805c0a643863937653626b90/freezer.state
	I0923 10:55:51.966130 2733844 api_server.go:204] freezer state: "THAWED"
	I0923 10:55:51.966157 2733844 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0923 10:55:51.973748 2733844 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0923 10:55:51.973775 2733844 status.go:456] multinode-934659 apiserver status = Running (err=<nil>)
	I0923 10:55:51.973786 2733844 status.go:176] multinode-934659 status: &{Name:multinode-934659 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:55:51.973803 2733844 status.go:174] checking status of multinode-934659-m02 ...
	I0923 10:55:51.974112 2733844 cli_runner.go:164] Run: docker container inspect multinode-934659-m02 --format={{.State.Status}}
	I0923 10:55:51.991101 2733844 status.go:364] multinode-934659-m02 host status = "Running" (err=<nil>)
	I0923 10:55:51.991126 2733844 host.go:66] Checking if "multinode-934659-m02" exists ...
	I0923 10:55:51.991505 2733844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-934659-m02
	I0923 10:55:52.013161 2733844 host.go:66] Checking if "multinode-934659-m02" exists ...
	I0923 10:55:52.013491 2733844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:55:52.013539 2733844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-934659-m02
	I0923 10:55:52.034654 2733844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41566 SSHKeyPath:/home/jenkins/minikube-integration/19689-2607666/.minikube/machines/multinode-934659-m02/id_rsa Username:docker}
	I0923 10:55:52.128593 2733844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:55:52.140596 2733844 status.go:176] multinode-934659-m02 status: &{Name:multinode-934659-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:55:52.140645 2733844 status.go:174] checking status of multinode-934659-m03 ...
	I0923 10:55:52.140948 2733844 cli_runner.go:164] Run: docker container inspect multinode-934659-m03 --format={{.State.Status}}
	I0923 10:55:52.162862 2733844 status.go:364] multinode-934659-m03 host status = "Stopped" (err=<nil>)
	I0923 10:55:52.162888 2733844 status.go:377] host is not running, skipping remaining checks
	I0923 10:55:52.162896 2733844 status.go:176] multinode-934659-m03 status: &{Name:multinode-934659-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-934659 node start m03 -v=7 --alsologtostderr: (9.051419925s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.85s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (101.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-934659
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-934659
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-934659: (25.071186304s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-934659 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-934659 --wait=true -v=8 --alsologtostderr: (1m16.694252275s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-934659
--- PASS: TestMultiNode/serial/RestartKeepsNodes (101.89s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-934659 node delete m03: (4.901735612s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.57s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-934659 stop: (23.845785155s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-934659 status: exit status 7 (96.052422ms)

                                                
                                                
-- stdout --
	multinode-934659
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-934659-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-934659 status --alsologtostderr: exit status 7 (84.410987ms)

                                                
                                                
-- stdout --
	multinode-934659
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-934659-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 10:58:13.461690 2742176 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:58:13.461824 2742176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:58:13.461836 2742176 out.go:358] Setting ErrFile to fd 2...
	I0923 10:58:13.461842 2742176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:58:13.462082 2742176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
	I0923 10:58:13.462282 2742176 out.go:352] Setting JSON to false
	I0923 10:58:13.462325 2742176 mustload.go:65] Loading cluster: multinode-934659
	I0923 10:58:13.462427 2742176 notify.go:220] Checking for updates...
	I0923 10:58:13.462740 2742176 config.go:182] Loaded profile config "multinode-934659": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 10:58:13.462763 2742176 status.go:174] checking status of multinode-934659 ...
	I0923 10:58:13.463667 2742176 cli_runner.go:164] Run: docker container inspect multinode-934659 --format={{.State.Status}}
	I0923 10:58:13.482176 2742176 status.go:364] multinode-934659 host status = "Stopped" (err=<nil>)
	I0923 10:58:13.482251 2742176 status.go:377] host is not running, skipping remaining checks
	I0923 10:58:13.482258 2742176 status.go:176] multinode-934659 status: &{Name:multinode-934659 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:58:13.482283 2742176 status.go:174] checking status of multinode-934659-m02 ...
	I0923 10:58:13.482676 2742176 cli_runner.go:164] Run: docker container inspect multinode-934659-m02 --format={{.State.Status}}
	I0923 10:58:13.500807 2742176 status.go:364] multinode-934659-m02 host status = "Stopped" (err=<nil>)
	I0923 10:58:13.500831 2742176 status.go:377] host is not running, skipping remaining checks
	I0923 10:58:13.500838 2742176 status.go:176] multinode-934659-m02 status: &{Name:multinode-934659-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.03s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-934659 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0923 10:58:41.157787 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-934659 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.69060366s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-934659 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.38s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-934659
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-934659-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-934659-m02 --driver=docker  --container-runtime=containerd: exit status 14 (86.805808ms)

                                                
                                                
-- stdout --
	* [multinode-934659-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-2607666/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2607666/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-934659-m02' is duplicated with machine name 'multinode-934659-m02' in profile 'multinode-934659'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-934659-m03 --driver=docker  --container-runtime=containerd
E0923 10:59:28.407580 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-934659-m03 --driver=docker  --container-runtime=containerd: (31.508342847s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-934659
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-934659: exit status 80 (333.110691ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-934659 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-934659-m03 already exists in multinode-934659-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-934659-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-934659-m03: (1.962563086s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.94s)

                                                
                                    
TestPreload (125.5s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-611554 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0923 11:00:04.221520 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-611554 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m26.280919446s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-611554 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-611554 image pull gcr.io/k8s-minikube/busybox: (2.01240372s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-611554
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-611554: (12.048453109s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-611554 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-611554 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (22.357525468s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-611554 image list
helpers_test.go:175: Cleaning up "test-preload-611554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-611554
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-611554: (2.509265184s)
--- PASS: TestPreload (125.50s)

                                                
                                    
TestScheduledStopUnix (107.1s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-380122 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-380122 --memory=2048 --driver=docker  --container-runtime=containerd: (31.067676728s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-380122 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-380122 -n scheduled-stop-380122
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-380122 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0923 11:02:21.815176 2613053 retry.go:31] will retry after 87.615µs: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.815354 2613053 retry.go:31] will retry after 110.131µs: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.816485 2613053 retry.go:31] will retry after 295.283µs: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.817619 2613053 retry.go:31] will retry after 373.944µs: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.818751 2613053 retry.go:31] will retry after 549.374µs: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.819858 2613053 retry.go:31] will retry after 941.324µs: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.820931 2613053 retry.go:31] will retry after 1.185622ms: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.823048 2613053 retry.go:31] will retry after 2.336901ms: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.826179 2613053 retry.go:31] will retry after 1.635708ms: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.828333 2613053 retry.go:31] will retry after 4.819169ms: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.833508 2613053 retry.go:31] will retry after 5.666636ms: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.839725 2613053 retry.go:31] will retry after 8.850806ms: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.848953 2613053 retry.go:31] will retry after 14.385836ms: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.864146 2613053 retry.go:31] will retry after 16.73325ms: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.881407 2613053 retry.go:31] will retry after 19.447057ms: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
I0923 11:02:21.902011 2613053 retry.go:31] will retry after 55.32948ms: open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/scheduled-stop-380122/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-380122 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-380122 -n scheduled-stop-380122
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-380122
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-380122 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-380122
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-380122: exit status 7 (64.705412ms)

                                                
                                                
-- stdout --
	scheduled-stop-380122
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-380122 -n scheduled-stop-380122
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-380122 -n scheduled-stop-380122: exit status 7 (65.156626ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-380122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-380122
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-380122: (4.530184509s)
--- PASS: TestScheduledStopUnix (107.10s)

                                                
                                    
TestInsufficientStorage (10.58s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-231996 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E0923 11:03:41.158542 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-231996 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.173117985s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1cf4b8ae-b2d1-47a1-b03f-025fb7d9fe52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-231996] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a36a75ca-35dc-4d24-b9bf-cc2f684722e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"15a13941-2a31-4444-9e76-d8c2cf04bcad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a28124f3-cc89-47d4-bff0-16e82859f814","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19689-2607666/kubeconfig"}}
	{"specversion":"1.0","id":"a9429057-9e1a-44cd-8657-a9701198adb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2607666/.minikube"}}
	{"specversion":"1.0","id":"d9a2e6f2-d145-4b84-9999-409e9b17a7aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2ed0aa65-4572-44c2-86d4-0615fa16c7eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"15775429-409c-4140-aa63-9471b88e419d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a79a789e-2d1d-4465-b8ab-ecee70ded1bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"dc95bdd1-b596-41f7-9633-72c03c8ee606","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"982f6218-3a39-48c4-a201-2407df42bf14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e8a8ea84-b182-47de-8dbf-8ddd7c7f3255","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-231996\" primary control-plane node in \"insufficient-storage-231996\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c487da81-31b9-437f-b311-53e5067264c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bde5477a-b96d-446b-899a-ada619ec20c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a95049c7-d763-4135-8a35-a129466eb89b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-231996 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-231996 --output=json --layout=cluster: exit status 7 (278.537451ms)

-- stdout --
	{"Name":"insufficient-storage-231996","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-231996","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0923 11:03:45.805059 2760556 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-231996" does not appear in /home/jenkins/minikube-integration/19689-2607666/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-231996 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-231996 --output=json --layout=cluster: exit status 7 (272.502122ms)

-- stdout --
	{"Name":"insufficient-storage-231996","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-231996","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0923 11:03:46.077062 2760616 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-231996" does not appear in /home/jenkins/minikube-integration/19689-2607666/kubeconfig
	E0923 11:03:46.087712 2760616 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/insufficient-storage-231996/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-231996" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-231996
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-231996: (1.859924886s)
--- PASS: TestInsufficientStorage (10.58s)

TestRunningBinaryUpgrade (91.53s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1120795547 start -p running-upgrade-111182 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1120795547 start -p running-upgrade-111182 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.19084287s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-111182 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-111182 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.465070502s)
helpers_test.go:175: Cleaning up "running-upgrade-111182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-111182
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-111182: (3.111604314s)
--- PASS: TestRunningBinaryUpgrade (91.53s)

TestKubernetesUpgrade (354.93s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-785263 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-785263 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.43750016s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-785263
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-785263: (1.231140676s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-785263 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-785263 status --format={{.Host}}: exit status 7 (70.649035ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-785263 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-785263 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m37.361831196s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-785263 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-785263 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-785263 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (85.25626ms)

-- stdout --
	* [kubernetes-upgrade-785263] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-2607666/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2607666/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-785263
	    minikube start -p kubernetes-upgrade-785263 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7852632 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-785263 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-785263 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-785263 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (10.029684616s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-785263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-785263
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-785263: (4.588618625s)
--- PASS: TestKubernetesUpgrade (354.93s)

TestMissingContainerUpgrade (172.29s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3166471581 start -p missing-upgrade-660991 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3166471581 start -p missing-upgrade-660991 --memory=2200 --driver=docker  --container-runtime=containerd: (1m34.085724268s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-660991
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-660991: (10.260583931s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-660991
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-660991 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-660991 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.345354278s)
helpers_test.go:175: Cleaning up "missing-upgrade-660991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-660991
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-660991: (2.350644037s)
--- PASS: TestMissingContainerUpgrade (172.29s)

TestPause/serial/Start (63.89s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-292332 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-292332 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m3.891210198s)
--- PASS: TestPause/serial/Start (63.89s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-656285 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-656285 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (97.737393ms)

-- stdout --
	* [NoKubernetes-656285] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-2607666/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2607666/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (42.34s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-656285 --driver=docker  --container-runtime=containerd
E0923 11:04:28.407351 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-656285 --driver=docker  --container-runtime=containerd: (41.867701932s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-656285 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.34s)

TestNoKubernetes/serial/StartWithStopK8s (17.48s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-656285 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-656285 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.245904288s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-656285 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-656285 status -o json: exit status 2 (310.188264ms)

-- stdout --
	{"Name":"NoKubernetes-656285","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-656285
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-656285: (1.928558066s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.48s)

TestNoKubernetes/serial/Start (5.51s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-656285 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-656285 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.509943735s)
--- PASS: TestNoKubernetes/serial/Start (5.51s)

TestPause/serial/SecondStartNoReconfiguration (7.03s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-292332 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-292332 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.998515634s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.03s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-656285 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-656285 "sudo systemctl is-active --quiet service kubelet": exit status 1 (313.031041ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

TestNoKubernetes/serial/ProfileList (1.37s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.37s)

TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-656285
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-656285: (1.32408265s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (7.27s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-656285 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-656285 --driver=docker  --container-runtime=containerd: (7.273499287s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.27s)

TestPause/serial/Pause (0.76s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-292332 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-292332 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-292332 --output=json --layout=cluster: exit status 2 (304.927446ms)

-- stdout --
	{"Name":"pause-292332","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-292332","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (1.15s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-292332 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-292332 --alsologtostderr -v=5: (1.151420352s)
--- PASS: TestPause/serial/Unpause (1.15s)

TestPause/serial/PauseAgain (1.17s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-292332 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-292332 --alsologtostderr -v=5: (1.17092787s)
--- PASS: TestPause/serial/PauseAgain (1.17s)

TestPause/serial/DeletePaused (2.86s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-292332 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-292332 --alsologtostderr -v=5: (2.862076434s)
--- PASS: TestPause/serial/DeletePaused (2.86s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-656285 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-656285 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.791905ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestPause/serial/VerifyDeletedResources (0.24s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-292332
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-292332: exit status 1 (23.198842ms)

-- stdout --
	[]

                                                
** stderr ** 
	Error response from daemon: get pause-292332: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.24s)

TestStoppedBinaryUpgrade/Setup (0.91s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.91s)

TestStoppedBinaryUpgrade/Upgrade (106.41s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3297131632 start -p stopped-upgrade-782301 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0923 11:08:41.157641 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3297131632 start -p stopped-upgrade-782301 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.399744662s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3297131632 -p stopped-upgrade-782301 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3297131632 -p stopped-upgrade-782301 stop: (20.129540674s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-782301 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0923 11:09:28.406959 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-782301 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.87974654s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (106.41s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-782301
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-782301: (1.108540549s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

TestNetworkPlugins/group/false (4.69s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-605708 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-605708 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (208.81132ms)

-- stdout --
	* [false-605708] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-2607666/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2607666/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0923 11:11:24.641808 2800442 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:11:24.642063 2800442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:11:24.642085 2800442 out.go:358] Setting ErrFile to fd 2...
	I0923 11:11:24.642106 2800442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:11:24.642359 2800442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2607666/.minikube/bin
	I0923 11:11:24.642784 2800442 out.go:352] Setting JSON to false
	I0923 11:11:24.643781 2800442 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":154432,"bootTime":1726935453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 11:11:24.643872 2800442 start.go:139] virtualization:  
	I0923 11:11:24.647364 2800442 out.go:177] * [false-605708] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 11:11:24.650125 2800442 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 11:11:24.650184 2800442 notify.go:220] Checking for updates...
	I0923 11:11:24.661367 2800442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:11:24.669848 2800442 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-2607666/kubeconfig
	I0923 11:11:24.671713 2800442 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2607666/.minikube
	I0923 11:11:24.673523 2800442 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 11:11:24.675299 2800442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:11:24.677509 2800442 config.go:182] Loaded profile config "force-systemd-flag-950598": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:11:24.677611 2800442 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:11:24.699813 2800442 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 11:11:24.699939 2800442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:11:24.782472 2800442 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 11:11:24.769451633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:11:24.782584 2800442 docker.go:318] overlay module found
	I0923 11:11:24.784681 2800442 out.go:177] * Using the docker driver based on user configuration
	I0923 11:11:24.786611 2800442 start.go:297] selected driver: docker
	I0923 11:11:24.786625 2800442 start.go:901] validating driver "docker" against <nil>
	I0923 11:11:24.786639 2800442 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:11:24.789251 2800442 out.go:201] 
	W0923 11:11:24.791093 2800442 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0923 11:11:24.793023 2800442 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-605708 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-605708

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-605708

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-605708

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-605708

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-605708

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-605708

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-605708

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-605708

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-605708

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-605708

>>> host: /etc/nsswitch.conf:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: /etc/hosts:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: /etc/resolv.conf:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-605708

>>> host: crictl pods:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: crictl containers:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> k8s: describe netcat deployment:
error: context "false-605708" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-605708" does not exist

>>> k8s: netcat logs:
error: context "false-605708" does not exist

>>> k8s: describe coredns deployment:
error: context "false-605708" does not exist

>>> k8s: describe coredns pods:
error: context "false-605708" does not exist

>>> k8s: coredns logs:
error: context "false-605708" does not exist

>>> k8s: describe api server pod(s):
error: context "false-605708" does not exist

>>> k8s: api server logs:
error: context "false-605708" does not exist

>>> host: /etc/cni:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: ip a s:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: ip r s:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: iptables-save:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: iptables table nat:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> k8s: describe kube-proxy daemon set:
error: context "false-605708" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-605708" does not exist

>>> k8s: kube-proxy logs:
error: context "false-605708" does not exist

>>> host: kubelet daemon status:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: kubelet daemon config:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> k8s: kubelet logs:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-605708

>>> host: docker daemon status:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: docker daemon config:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: /etc/docker/daemon.json:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: docker system info:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: cri-docker daemon status:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: cri-docker daemon config:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: cri-dockerd version:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: containerd daemon status:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: containerd daemon config:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: /etc/containerd/config.toml:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: containerd config dump:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: crio daemon status:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: crio daemon config:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: /etc/crio:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

>>> host: crio config:
* Profile "false-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605708"

----------------------- debugLogs end: false-605708 [took: 4.281235368s] --------------------------------
helpers_test.go:175: Cleaning up "false-605708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-605708
--- PASS: TestNetworkPlugins/group/false (4.69s)
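TestNetworkPlugins/group/false passes precisely because the start is rejected: containerd provides no pod networking of its own, so minikube refuses `--cni=false` and exits with status 14 (MK_USAGE), as shown in the stderr above. The following is a minimal, hypothetical shell re-creation of that validation, not minikube's actual code:

```shell
#!/bin/sh
# Hypothetical sketch of the check behind the "exit status 14" above:
# containerd and CRI-O ship no built-in pod networking, so some CNI is
# required when either runtime is selected.
validate_cni() { # usage: validate_cni <runtime> <cni>
  runtime=$1
  cni=$2
  case "$runtime" in
  containerd|crio)
    if [ "$cni" = "false" ]; then
      echo "X Exiting due to MK_USAGE: The \"$runtime\" container runtime requires CNI" >&2
      return 14 # MK_USAGE
    fi
    ;;
  esac
  return 0
}

validate_cni containerd false || echo "start rejected with exit status $?"
validate_cni docker false && echo "docker tolerates --cni=false"
```

Any CNI value other than `false` (for example `--cni=bridge`) would pass this check, which is why the other network-plugin variants in this group start successfully.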

TestStartStop/group/old-k8s-version/serial/FirstStart (131.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-815973 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0923 11:13:41.157055 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:28.407412 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-815973 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m11.006286817s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (131.01s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-815973 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fe709d6c-0c9a-4f8f-aeaa-20f22db3c5e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fe709d6c-0c9a-4f8f-aeaa-20f22db3c5e5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004043094s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-815973 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)
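The DeployApp step ends by exec'ing `ulimit -n` inside the busybox pod, i.e. reading the soft limit on open file descriptors seen by a process in the container. The same probe can be run on any host for comparison (the value will differ per machine):

```shell
#!/bin/sh
# Report the current shell's soft open-file-descriptor limit, the same
# value the test reads from inside the pod.
soft_limit=$(ulimit -n)
echo "open-file soft limit here: $soft_limit"
```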

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-815973 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-815973 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-815973 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-815973 --alsologtostderr -v=3: (12.143979108s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-815973 -n old-k8s-version-815973
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-815973 -n old-k8s-version-815973: exit status 7 (108.827353ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-815973 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
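The "status error: exit status 7 (may be ok)" above reflects how `minikube status` reports state: the exit code is a bitmask rather than a plain failure. Reading the flags as 1 = host not running, 2 = cluster not running, 4 = kubernetes not running (my reading of minikube's status flags; treat the exact bit meanings as an assumption), a cleanly stopped profile yields 1|2|4 = 7, which the test accepts:

```shell
#!/bin/sh
# Decode a minikube status exit code under the bitmask assumption above.
decode_status() { # usage: decode_status <exit-code>
  code=$1
  if [ "$code" -eq 0 ]; then
    echo "everything running"
    return 0
  fi
  [ $((code & 1)) -ne 0 ] && echo "host not running"
  [ $((code & 2)) -ne 0 ] && echo "cluster not running"
  [ $((code & 4)) -ne 0 ] && echo "kubernetes not running"
  return 0
}

decode_status 7 # the stopped profile seen in the test above
```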

TestStartStop/group/old-k8s-version/serial/SecondStart (151.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-815973 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-815973 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m30.726136039s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-815973 -n old-k8s-version-815973
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (151.46s)

TestStartStop/group/no-preload/serial/FirstStart (64.58s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-669153 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-669153 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m4.579342731s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.58s)

TestStartStop/group/no-preload/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-669153 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b095579a-6ae3-4ff0-b021-5f6d80a461db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b095579a-6ae3-4ff0-b021-5f6d80a461db] Running
E0923 11:16:44.223533 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003529842s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-669153 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-669153 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-669153 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.011836383s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-669153 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/no-preload/serial/Stop (12.22s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-669153 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-669153 --alsologtostderr -v=3: (12.217272101s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.22s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-669153 -n no-preload-669153
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-669153 -n no-preload-669153: exit status 7 (81.16142ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-669153 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/no-preload/serial/SecondStart (266.84s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-669153 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-669153 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.458163954s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-669153 -n no-preload-669153
E0923 11:21:28.432853 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.84s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8c8qn" [012b4bf6-f7bf-4b40-98d1-3ea7d5077d45] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003835252s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8c8qn" [012b4bf6-f7bf-4b40-98d1-3ea7d5077d45] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004148067s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-815973 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-815973 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-815973 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-815973 -n old-k8s-version-815973
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-815973 -n old-k8s-version-815973: exit status 2 (323.238505ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-815973 -n old-k8s-version-815973
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-815973 -n old-k8s-version-815973: exit status 2 (330.593239ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-815973 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-815973 -n old-k8s-version-815973
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-815973 -n old-k8s-version-815973
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.92s)

TestStartStop/group/embed-certs/serial/FirstStart (53.67s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-908057 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 11:18:41.157636 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-908057 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (53.6709823s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.67s)

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-908057 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9bf983f7-4152-490c-a751-ec9b4377a28b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9bf983f7-4152-490c-a751-ec9b4377a28b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003821341s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-908057 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-908057 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-908057 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.055993943s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-908057 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/embed-certs/serial/Stop (12.04s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-908057 --alsologtostderr -v=3
E0923 11:19:28.407468 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-908057 --alsologtostderr -v=3: (12.03878617s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.04s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-908057 -n embed-certs-908057
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-908057 -n embed-certs-908057: exit status 7 (67.765915ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-908057 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (269.6s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-908057 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 11:20:06.494764 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.501121 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.512560 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.533925 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.575341 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.656823 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.818272 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:07.139976 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:07.782160 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:09.064508 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:11.625987 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:16.747769 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:26.989693 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:47.471163 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-908057 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m29.171111455s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-908057 -n embed-certs-908057
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (269.60s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fpvdn" [48de4221-f296-4d20-9cc3-87a988b0ebe0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003239844s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fpvdn" [48de4221-f296-4d20-9cc3-87a988b0ebe0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003623152s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-669153 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-669153 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (3.26s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-669153 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-669153 -n no-preload-669153
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-669153 -n no-preload-669153: exit status 2 (335.786047ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-669153 -n no-preload-669153
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-669153 -n no-preload-669153: exit status 2 (372.498039ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-669153 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-669153 -n no-preload-669153
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-669153 -n no-preload-669153
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.26s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-022757 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 11:22:50.354113 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-022757 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m27.542389835s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.54s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-022757 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [da81a480-d7cc-4fd7-9a23-ebf8d1d23200] Pending
helpers_test.go:344: "busybox" [da81a480-d7cc-4fd7-9a23-ebf8d1d23200] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [da81a480-d7cc-4fd7-9a23-ebf8d1d23200] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004479026s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-022757 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-022757 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-022757 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.126182812s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-022757 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-022757 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-022757 --alsologtostderr -v=3: (12.116840972s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-022757 -n default-k8s-diff-port-022757
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-022757 -n default-k8s-diff-port-022757: exit status 7 (76.141913ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-022757 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (278.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-022757 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 11:23:41.157153 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-022757 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m37.622293446s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-022757 -n default-k8s-diff-port-022757
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (278.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wdcq7" [ec03de80-44a6-4f52-97fa-753d1e7832cb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005884654s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wdcq7" [ec03de80-44a6-4f52-97fa-753d1e7832cb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004923704s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-908057 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-908057 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-908057 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-908057 -n embed-certs-908057
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-908057 -n embed-certs-908057: exit status 2 (323.159951ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-908057 -n embed-certs-908057
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-908057 -n embed-certs-908057: exit status 2 (314.908934ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-908057 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-908057 -n embed-certs-908057
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-908057 -n embed-certs-908057
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (34.49s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-757115 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 11:24:28.407113 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-757115 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (34.487487431s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-757115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-757115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.170510485s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-757115 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-757115 --alsologtostderr -v=3: (1.262987582s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-757115 -n newest-cni-757115
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-757115 -n newest-cni-757115: exit status 7 (65.709818ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-757115 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.95s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-757115 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 11:25:06.495460 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-757115 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (15.547030338s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-757115 -n newest-cni-757115
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-757115 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-757115 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-757115 --alsologtostderr -v=1: (1.044319931s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-757115 -n newest-cni-757115
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-757115 -n newest-cni-757115: exit status 2 (328.784387ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-757115 -n newest-cni-757115
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-757115 -n newest-cni-757115: exit status 2 (317.507447ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-757115 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-757115 -n newest-cni-757115
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-757115 -n newest-cni-757115
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (86.63s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-605708 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0923 11:25:34.196508 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:26:39.211933 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:26:39.218349 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:26:39.229748 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:26:39.251181 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:26:39.292824 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:26:39.374220 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:26:39.535834 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:26:39.857580 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:26:40.499532 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:26:41.781191 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:26:44.343208 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-605708 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m26.628609107s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.63s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-605708 "pgrep -a kubelet"
I0923 11:26:47.394356 2613053 config.go:182] Loaded profile config "auto-605708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-605708 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cfz7h" [36aa91ac-be68-4023-a4ec-d4757248b2be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 11:26:49.465568 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-cfz7h" [36aa91ac-be68-4023-a4ec-d4757248b2be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003738118s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.32s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-605708 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-605708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-605708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (48.08s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-605708 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0923 11:27:20.189692 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:28:01.152177 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-605708 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (48.075387121s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (48.08s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-kf6nz" [aeb7d9bb-43c6-4948-9fe5-e38b67f4bb82] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004133456s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-605708 "pgrep -a kubelet"
I0923 11:28:13.100688 2613053 config.go:182] Loaded profile config "kindnet-605708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-605708 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xh9kc" [16998965-7fec-4bce-8138-3535d2b457c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xh9kc" [16998965-7fec-4bce-8138-3535d2b457c4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004552122s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-n2rt7" [6cb49cbf-b553-43bf-9f0e-4ae1dca0c3e0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003891012s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-n2rt7" [6cb49cbf-b553-43bf-9f0e-4ae1dca0c3e0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00365255s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-022757 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-605708 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-605708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-605708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-022757 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-022757 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-022757 -n default-k8s-diff-port-022757
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-022757 -n default-k8s-diff-port-022757: exit status 2 (323.875551ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-022757 -n default-k8s-diff-port-022757
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-022757 -n default-k8s-diff-port-022757: exit status 2 (320.325158ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-022757 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-022757 --alsologtostderr -v=1: (1.022261565s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-022757 -n default-k8s-diff-port-022757
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-022757 -n default-k8s-diff-port-022757
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.96s)
E0923 11:33:06.814082 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/kindnet-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:06.820528 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/kindnet-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:06.832043 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/kindnet-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:06.853657 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/kindnet-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:06.895166 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/kindnet-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:06.976649 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/kindnet-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:07.138221 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/kindnet-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:07.460071 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/kindnet-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:08.101768 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/kindnet-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:09.383537 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/kindnet-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:09.600091 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:11.945586 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/kindnet-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:14.549600 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/default-k8s-diff-port-022757/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:14.555964 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/default-k8s-diff-port-022757/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:14.567425 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/default-k8s-diff-port-022757/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:14.588845 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/default-k8s-diff-port-022757/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:14.630476 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/default-k8s-diff-port-022757/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:14.711972 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/default-k8s-diff-port-022757/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:14.873565 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/default-k8s-diff-port-022757/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:15.195203 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/default-k8s-diff-port-022757/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:15.837300 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/default-k8s-diff-port-022757/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:17.067210 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/kindnet-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:17.119651 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/default-k8s-diff-port-022757/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:19.681876 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/default-k8s-diff-port-022757/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:24.225130 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:24.803511 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/default-k8s-diff-port-022757/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:27.309484 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/kindnet-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:35.045060 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/default-k8s-diff-port-022757/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/calico/Start (81.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-605708 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0923 11:28:41.157100 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-605708 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m21.33219164s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (58.80s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-605708 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0923 11:29:11.477844 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:23.073540 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:28.407795 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/addons-895903/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-605708 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (58.803996847s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.80s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-605708 "pgrep -a kubelet"
I0923 11:29:45.763457 2613053 config.go:182] Loaded profile config "custom-flannel-605708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-605708 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n5d68" [a5bb86d4-8a2a-4108-98b6-65a88c446b7b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-n5d68" [a5bb86d4-8a2a-4108-98b6-65a88c446b7b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004318847s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9sw8w" [5490c817-e415-47d3-8343-5ef212f59363] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.040400139s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-605708 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-605708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-605708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.50s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-605708 "pgrep -a kubelet"
I0923 11:30:00.912075 2613053 config.go:182] Loaded profile config "calico-605708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.50s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-605708 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f48kb" [7fbe3da2-332f-427e-9f2a-d6c1e521c55b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f48kb" [7fbe3da2-332f-427e-9f2a-d6c1e521c55b] Running
E0923 11:30:06.494034 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/old-k8s-version-815973/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004111905s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-605708 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-605708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-605708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (84.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-605708 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-605708 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m24.31975975s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (52.60s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-605708 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-605708 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (52.599380826s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.60s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-fkjm6" [b7bf8c80-8e15-4d78-8785-4c96db58b0fe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004064934s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-605708 "pgrep -a kubelet"
I0923 11:31:37.057724 2613053 config.go:182] Loaded profile config "flannel-605708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (24.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-605708 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-snhxt" [8ec545d3-b2f6-47cf-a854-8e950d6b513b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 11:31:39.212278 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-snhxt" [8ec545d3-b2f6-47cf-a854-8e950d6b513b] Running
E0923 11:31:57.913444 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 24.00422729s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (24.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-605708 "pgrep -a kubelet"
I0923 11:31:44.369496 2613053 config.go:182] Loaded profile config "enable-default-cni-605708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (25.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-605708 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-w57jz" [5cc83994-64ed-496b-a28c-8b45a15bbad0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 11:31:47.659894 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:47.666611 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:47.678135 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:47.699726 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:47.741284 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:47.822834 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:47.984378 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:48.306034 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:48.947458 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:50.229132 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:52.791347 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-w57jz" [5cc83994-64ed-496b-a28c-8b45a15bbad0] Running
E0923 11:32:06.915186 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/no-preload-669153/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:08.155463 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 25.004868699s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (25.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-605708 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-605708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-605708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-605708 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-605708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-605708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (73.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-605708 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0923 11:32:28.637627 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/auto-605708/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-605708 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m13.736912219s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.74s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-605708 "pgrep -a kubelet"
I0923 11:33:39.673248 2613053 config.go:182] Loaded profile config "bridge-605708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-605708 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fjqxf" [327360af-2621-4e5a-8a16-ddd1d89579e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 11:33:41.157383 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/functional-238803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-fjqxf" [327360af-2621-4e5a-8a16-ddd1d89579e6] Running
E0923 11:33:47.791587 2613053 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2607666/.minikube/profiles/kindnet-605708/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003661956s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-605708 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-605708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-605708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (27/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.53s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-775785 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-775785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-775785
--- SKIP: TestDownloadOnlyKic (0.53s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-981767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-981767
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (4.39s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-605708 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-605708

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-605708

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-605708

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-605708

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-605708

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-605708

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-605708

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-605708

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-605708

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-605708

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-605708

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-605708" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-605708" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-605708" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-605708" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-605708" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-605708" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-605708" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-605708" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-605708" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-605708" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-605708" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-605708

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

>>> host: cri-dockerd version:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

>>> host: containerd daemon status:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

>>> host: containerd daemon config:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

>>> host: containerd config dump:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

>>> host: crio daemon status:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

>>> host: crio daemon config:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

>>> host: /etc/crio:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

>>> host: crio config:
* Profile "kubenet-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605708"

----------------------- debugLogs end: kubenet-605708 [took: 4.210883454s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-605708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-605708
--- SKIP: TestNetworkPlugins/group/kubenet (4.39s)

TestNetworkPlugins/group/cilium (6.01s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-605708 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-605708

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-605708

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-605708

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-605708

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-605708

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-605708

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-605708

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-605708

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-605708

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-605708

>>> host: /etc/nsswitch.conf:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: /etc/hosts:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: /etc/resolv.conf:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-605708

>>> host: crictl pods:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: crictl containers:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> k8s: describe netcat deployment:
error: context "cilium-605708" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-605708" does not exist

>>> k8s: netcat logs:
error: context "cilium-605708" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-605708" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-605708" does not exist

>>> k8s: coredns logs:
error: context "cilium-605708" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-605708" does not exist

>>> k8s: api server logs:
error: context "cilium-605708" does not exist

>>> host: /etc/cni:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: ip a s:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: ip r s:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: iptables-save:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: iptables table nat:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-605708

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-605708

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-605708" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-605708" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-605708

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-605708

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-605708" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-605708" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-605708" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-605708" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-605708" does not exist

                                                
>>> host: kubelet daemon status:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: kubelet daemon config:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> k8s: kubelet logs:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-605708

                                                
>>> host: docker daemon status:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: docker daemon config:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: docker system info:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: cri-docker daemon status:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: cri-docker daemon config:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: cri-dockerd version:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: containerd daemon status:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: containerd daemon config:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: containerd config dump:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: crio daemon status:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: crio daemon config:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: /etc/crio:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

>>> host: crio config:
* Profile "cilium-605708" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605708"

----------------------- debugLogs end: cilium-605708 [took: 5.760787416s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-605708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-605708
--- SKIP: TestNetworkPlugins/group/cilium (6.01s)