Test Report: Docker_Linux_containerd_arm64 19468

91a16964608358fea9174134e48bcab54b5c9be6:2024-08-19:35860

Test failures (1/328)

| Order | Failed Test               | Duration (s) |
|-------|---------------------------|--------------|
| 29    | TestAddons/serial/Volcano | 200.25       |

TestAddons/serial/Volcano (200.25s)
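
To rerun just this test against a local build, something like the following should work (a sketch: the TEST_ARGS hook and -test.run filter follow minikube's integration-test harness conventions; verify the exact flag names against test/integration/main_test.go before relying on them):

	# rerun only the Volcano addon test with the same driver/runtime as this job
	env TEST_ARGS="-minikube-start-args='--driver=docker --container-runtime=containerd' -test.run TestAddons/serial/Volcano" make integration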

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 41.719015ms
addons_test.go:913: volcano-controller stabilized in 41.883945ms
addons_test.go:905: volcano-admission stabilized in 41.930993ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-bbv4d" [43d20fb9-da3a-483b-bd83-f8b4f5170274] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00452752s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-kmj97" [7cf0731e-8d33-4380-9425-2f51db81929e] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.006052743s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-2pcjc" [e36a1ac7-31fd-49d5-a5ff-9f534a265247] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004071564s
addons_test.go:932: (dbg) Run:  kubectl --context addons-764717 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-764717 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-764717 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5787faf1-7ab9-4f1f-8cb7-b22508fb8326] Pending
helpers_test.go:344: "test-job-nginx-0" [5787faf1-7ab9-4f1f-8cb7-b22508fb8326] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-764717 -n addons-764717
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-19 19:17:55.109317426 +0000 UTC m=+363.618326834
addons_test.go:964: (dbg) Run:  kubectl --context addons-764717 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-764717 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-8c573cbf-3068-497d-aeb1-c6fa5c219843
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-76wch (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-76wch:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-764717 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-764717 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
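
The failure above is a resource-scheduling problem rather than a Volcano malfunction: the test-job pod requests (and limits) cpu: 1, while the whole cluster is a single minikube node capped at 2 CPUs (see "NanoCpus": 2000000000 in the docker inspect below), and the control plane plus the dozen enabled addons already consume most of that. A quick way to confirm against a live cluster is to compare the node's allocatable CPU with what is already requested (standard kubectl; shown as a sketch):

	# how much CPU the scheduler already considers spoken for on the node
	kubectl --context addons-764717 describe node addons-764717 | grep -A 8 "Allocated resources"
	# per-pod CPU requests across all namespaces
	kubectl --context addons-764717 get pods -A \
	  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu
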
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-764717
helpers_test.go:235: (dbg) docker inspect addons-764717:

-- stdout --
	[
	    {
	        "Id": "b9ad179f7ea78298879dc4d04a84f7a8d9ebb7b71ae69e5bfcb060df1a7f3da7",
	        "Created": "2024-08-19T19:12:31.711178327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 720319,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T19:12:31.854850207Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/b9ad179f7ea78298879dc4d04a84f7a8d9ebb7b71ae69e5bfcb060df1a7f3da7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9ad179f7ea78298879dc4d04a84f7a8d9ebb7b71ae69e5bfcb060df1a7f3da7/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9ad179f7ea78298879dc4d04a84f7a8d9ebb7b71ae69e5bfcb060df1a7f3da7/hosts",
	        "LogPath": "/var/lib/docker/containers/b9ad179f7ea78298879dc4d04a84f7a8d9ebb7b71ae69e5bfcb060df1a7f3da7/b9ad179f7ea78298879dc4d04a84f7a8d9ebb7b71ae69e5bfcb060df1a7f3da7-json.log",
	        "Name": "/addons-764717",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-764717:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-764717",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0a38acd88d185330ad804e1c1f92811e28e504e9da12edcc93d2a817048e7d01-init/diff:/var/lib/docker/overlay2/de9f36de956227ec2a8fa0009f2a1a4a1b7ddd9f6c0c9cd88d55102cc724f5b5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a38acd88d185330ad804e1c1f92811e28e504e9da12edcc93d2a817048e7d01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a38acd88d185330ad804e1c1f92811e28e504e9da12edcc93d2a817048e7d01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a38acd88d185330ad804e1c1f92811e28e504e9da12edcc93d2a817048e7d01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-764717",
	                "Source": "/var/lib/docker/volumes/addons-764717/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-764717",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-764717",
	                "name.minikube.sigs.k8s.io": "addons-764717",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7f874f026b1c851e3226420df293e515838870bb66e4b73032808ad632e9c655",
	            "SandboxKey": "/var/run/docker/netns/7f874f026b1c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-764717": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "560fe9c2d8b19272f4983c4274569c39b90c027386d8546f432a4f79cc3c1e3b",
	                    "EndpointID": "d58cd8a0e5189fa88a708ab82fba2dc44f86c56f75316942271bdbf4de643b3f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-764717",
	                        "b9ad179f7ea7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
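
Two HostConfig fields in the inspect output above quantify the node's capacity: "NanoCpus": 2000000000 is a 2-CPU cap, and "Memory": 4194304000 (4000 * 1024 * 1024) matches the --memory=4000 start flag. They can be read directly instead of dumping the whole document (standard docker CLI --format; a sketch):

	docker inspect -f 'cpus={{.HostConfig.NanoCpus}} mem={{.HostConfig.Memory}}' addons-764717
	# expected for this run: cpus=2000000000 mem=4194304000
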
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-764717 -n addons-764717
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-764717 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-764717 logs -n 25: (1.787191677s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-477663   | jenkins | v1.33.1 | 19 Aug 24 19:11 UTC |                     |
	|         | -p download-only-477663              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 19 Aug 24 19:11 UTC | 19 Aug 24 19:11 UTC |
	| delete  | -p download-only-477663              | download-only-477663   | jenkins | v1.33.1 | 19 Aug 24 19:11 UTC | 19 Aug 24 19:11 UTC |
	| start   | -o=json --download-only              | download-only-172961   | jenkins | v1.33.1 | 19 Aug 24 19:11 UTC |                     |
	|         | -p download-only-172961              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 19 Aug 24 19:12 UTC | 19 Aug 24 19:12 UTC |
	| delete  | -p download-only-172961              | download-only-172961   | jenkins | v1.33.1 | 19 Aug 24 19:12 UTC | 19 Aug 24 19:12 UTC |
	| delete  | -p download-only-477663              | download-only-477663   | jenkins | v1.33.1 | 19 Aug 24 19:12 UTC | 19 Aug 24 19:12 UTC |
	| delete  | -p download-only-172961              | download-only-172961   | jenkins | v1.33.1 | 19 Aug 24 19:12 UTC | 19 Aug 24 19:12 UTC |
	| start   | --download-only -p                   | download-docker-715450 | jenkins | v1.33.1 | 19 Aug 24 19:12 UTC |                     |
	|         | download-docker-715450               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-715450            | download-docker-715450 | jenkins | v1.33.1 | 19 Aug 24 19:12 UTC | 19 Aug 24 19:12 UTC |
	| start   | --download-only -p                   | binary-mirror-918461   | jenkins | v1.33.1 | 19 Aug 24 19:12 UTC |                     |
	|         | binary-mirror-918461                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33127               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-918461              | binary-mirror-918461   | jenkins | v1.33.1 | 19 Aug 24 19:12 UTC | 19 Aug 24 19:12 UTC |
	| addons  | enable dashboard -p                  | addons-764717          | jenkins | v1.33.1 | 19 Aug 24 19:12 UTC |                     |
	|         | addons-764717                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-764717          | jenkins | v1.33.1 | 19 Aug 24 19:12 UTC |                     |
	|         | addons-764717                        |                        |         |         |                     |                     |
	| start   | -p addons-764717 --wait=true         | addons-764717          | jenkins | v1.33.1 | 19 Aug 24 19:12 UTC | 19 Aug 24 19:14 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
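
For reference, the cluster-creating invocation recorded in the audit table above, reassembled into a single command (arguments exactly as logged):

	out/minikube-linux-arm64 start -p addons-764717 --wait=true --memory=4000 \
	  --alsologtostderr --addons=registry --addons=metrics-server \
	  --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
	  --addons=cloud-spanner --addons=inspektor-gadget \
	  --addons=storage-provisioner-rancher --addons=nvidia-device-plugin \
	  --addons=yakd --addons=volcano --driver=docker \
	  --container-runtime=containerd --addons=ingress --addons=ingress-dns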
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:12:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:12:07.160758  719822 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:12:07.160951  719822 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:12:07.160963  719822 out.go:358] Setting ErrFile to fd 2...
	I0819 19:12:07.160970  719822 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:12:07.161250  719822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
	I0819 19:12:07.161768  719822 out.go:352] Setting JSON to false
	I0819 19:12:07.162670  719822 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10469,"bootTime":1724084259,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 19:12:07.162746  719822 start.go:139] virtualization:  
	I0819 19:12:07.164784  719822 out.go:177] * [addons-764717] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 19:12:07.166222  719822 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:12:07.166360  719822 notify.go:220] Checking for updates...
	I0819 19:12:07.170240  719822 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:12:07.171941  719822 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-713648/kubeconfig
	I0819 19:12:07.173487  719822 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-713648/.minikube
	I0819 19:12:07.176274  719822 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 19:12:07.178755  719822 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:12:07.181111  719822 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:12:07.203446  719822 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 19:12:07.203586  719822 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 19:12:07.270178  719822 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 19:12:07.260207398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 19:12:07.270295  719822 docker.go:307] overlay module found
	I0819 19:12:07.273154  719822 out.go:177] * Using the docker driver based on user configuration
	I0819 19:12:07.276717  719822 start.go:297] selected driver: docker
	I0819 19:12:07.276741  719822 start.go:901] validating driver "docker" against <nil>
	I0819 19:12:07.276756  719822 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:12:07.277399  719822 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 19:12:07.332193  719822 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 19:12:07.321245657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 19:12:07.332367  719822 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 19:12:07.332615  719822 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:12:07.334282  719822 out.go:177] * Using Docker driver with root privileges
	I0819 19:12:07.335550  719822 cni.go:84] Creating CNI manager for ""
	I0819 19:12:07.335586  719822 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 19:12:07.335599  719822 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 19:12:07.335691  719822 start.go:340] cluster config:
	{Name:addons-764717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-764717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:07.337241  719822 out.go:177] * Starting "addons-764717" primary control-plane node in "addons-764717" cluster
	I0819 19:12:07.338996  719822 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 19:12:07.340404  719822 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 19:12:07.341814  719822 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 19:12:07.341895  719822 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-713648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0819 19:12:07.341915  719822 cache.go:56] Caching tarball of preloaded images
	I0819 19:12:07.341921  719822 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 19:12:07.341995  719822 preload.go:172] Found /home/jenkins/minikube-integration/19468-713648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 19:12:07.342006  719822 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0819 19:12:07.342332  719822 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/config.json ...
	I0819 19:12:07.342355  719822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/config.json: {Name:mk1c5f88d5cf803a36077b84c7c4e4bea4298b7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:07.357845  719822 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 19:12:07.357981  719822 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 19:12:07.358000  719822 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 19:12:07.358005  719822 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 19:12:07.358014  719822 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 19:12:07.358019  719822 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 19:12:24.445880  719822 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 19:12:24.445922  719822 cache.go:194] Successfully downloaded all kic artifacts
	I0819 19:12:24.445969  719822 start.go:360] acquireMachinesLock for addons-764717: {Name:mkfd7403f53eb1c04aef8faba3ec9c2f14f2196a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:12:24.446440  719822 start.go:364] duration metric: took 442.269µs to acquireMachinesLock for "addons-764717"
	I0819 19:12:24.446480  719822 start.go:93] Provisioning new machine with config: &{Name:addons-764717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-764717 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 19:12:24.446576  719822 start.go:125] createHost starting for "" (driver="docker")
	I0819 19:12:24.448285  719822 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 19:12:24.448546  719822 start.go:159] libmachine.API.Create for "addons-764717" (driver="docker")
	I0819 19:12:24.448580  719822 client.go:168] LocalClient.Create starting
	I0819 19:12:24.448684  719822 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19468-713648/.minikube/certs/ca.pem
	I0819 19:12:24.963594  719822 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19468-713648/.minikube/certs/cert.pem
	I0819 19:12:25.342655  719822 cli_runner.go:164] Run: docker network inspect addons-764717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 19:12:25.358483  719822 cli_runner.go:211] docker network inspect addons-764717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 19:12:25.358595  719822 network_create.go:284] running [docker network inspect addons-764717] to gather additional debugging logs...
	I0819 19:12:25.358618  719822 cli_runner.go:164] Run: docker network inspect addons-764717
	W0819 19:12:25.375708  719822 cli_runner.go:211] docker network inspect addons-764717 returned with exit code 1
	I0819 19:12:25.375742  719822 network_create.go:287] error running [docker network inspect addons-764717]: docker network inspect addons-764717: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-764717 not found
	I0819 19:12:25.375756  719822 network_create.go:289] output of [docker network inspect addons-764717]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-764717 not found
	
	** /stderr **
	I0819 19:12:25.375857  719822 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 19:12:25.392405  719822 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017b2ba0}
	I0819 19:12:25.392452  719822 network_create.go:124] attempt to create docker network addons-764717 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 19:12:25.392512  719822 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-764717 addons-764717
	I0819 19:12:25.464692  719822 network_create.go:108] docker network addons-764717 192.168.49.0/24 created
	I0819 19:12:25.464721  719822 kic.go:121] calculated static IP "192.168.49.2" for the "addons-764717" container
	I0819 19:12:25.464811  719822 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 19:12:25.481875  719822 cli_runner.go:164] Run: docker volume create addons-764717 --label name.minikube.sigs.k8s.io=addons-764717 --label created_by.minikube.sigs.k8s.io=true
	I0819 19:12:25.497923  719822 oci.go:103] Successfully created a docker volume addons-764717
	I0819 19:12:25.498032  719822 cli_runner.go:164] Run: docker run --rm --name addons-764717-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-764717 --entrypoint /usr/bin/test -v addons-764717:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0819 19:12:27.569307  719822 cli_runner.go:217] Completed: docker run --rm --name addons-764717-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-764717 --entrypoint /usr/bin/test -v addons-764717:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (2.07122698s)
	I0819 19:12:27.569340  719822 oci.go:107] Successfully prepared a docker volume addons-764717
	I0819 19:12:27.569363  719822 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 19:12:27.569384  719822 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 19:12:27.569469  719822 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19468-713648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-764717:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 19:12:31.635370  719822 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19468-713648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-764717:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.065860347s)
	I0819 19:12:31.635405  719822 kic.go:203] duration metric: took 4.066018499s to extract preloaded images to volume ...
	W0819 19:12:31.635557  719822 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 19:12:31.635670  719822 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 19:12:31.696198  719822 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-764717 --name addons-764717 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-764717 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-764717 --network addons-764717 --ip 192.168.49.2 --volume addons-764717:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0819 19:12:32.024090  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Running}}
	I0819 19:12:32.045894  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:32.081280  719822 cli_runner.go:164] Run: docker exec addons-764717 stat /var/lib/dpkg/alternatives/iptables
	I0819 19:12:32.149792  719822 oci.go:144] the created container "addons-764717" has a running status.
	I0819 19:12:32.149821  719822 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa...
	I0819 19:12:32.498663  719822 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 19:12:32.521179  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:32.547187  719822 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 19:12:32.547207  719822 kic_runner.go:114] Args: [docker exec --privileged addons-764717 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 19:12:32.631275  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:32.651969  719822 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:32.652063  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:32.678291  719822 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:32.678557  719822 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0819 19:12:32.678566  719822 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:32.839661  719822 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-764717
	
	I0819 19:12:32.839735  719822 ubuntu.go:169] provisioning hostname "addons-764717"
	I0819 19:12:32.839838  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:32.877848  719822 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:32.878083  719822 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0819 19:12:32.878096  719822 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-764717 && echo "addons-764717" | sudo tee /etc/hostname
	I0819 19:12:33.029316  719822 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-764717
	
	I0819 19:12:33.029423  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:33.052662  719822 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:33.052921  719822 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0819 19:12:33.052938  719822 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-764717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-764717/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-764717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:33.189961  719822 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:33.189991  719822 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19468-713648/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-713648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-713648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-713648/.minikube}
	I0819 19:12:33.190025  719822 ubuntu.go:177] setting up certificates
	I0819 19:12:33.190040  719822 provision.go:84] configureAuth start
	I0819 19:12:33.190136  719822 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-764717
	I0819 19:12:33.209914  719822 provision.go:143] copyHostCerts
	I0819 19:12:33.210006  719822 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-713648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-713648/.minikube/ca.pem (1082 bytes)
	I0819 19:12:33.210136  719822 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-713648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-713648/.minikube/cert.pem (1123 bytes)
	I0819 19:12:33.210206  719822 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-713648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-713648/.minikube/key.pem (1679 bytes)
	I0819 19:12:33.210295  719822 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-713648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-713648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-713648/.minikube/certs/ca-key.pem org=jenkins.addons-764717 san=[127.0.0.1 192.168.49.2 addons-764717 localhost minikube]
	I0819 19:12:34.047265  719822 provision.go:177] copyRemoteCerts
	I0819 19:12:34.047345  719822 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:34.047394  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:34.064703  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:34.158733  719822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-713648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:34.184013  719822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-713648/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 19:12:34.209795  719822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-713648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:34.234929  719822 provision.go:87] duration metric: took 1.044869285s to configureAuth
	I0819 19:12:34.234955  719822 ubuntu.go:193] setting minikube options for container-runtime
	I0819 19:12:34.235160  719822 config.go:182] Loaded profile config "addons-764717": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 19:12:34.235167  719822 machine.go:96] duration metric: took 1.583180671s to provisionDockerMachine
	I0819 19:12:34.235174  719822 client.go:171] duration metric: took 9.786584851s to LocalClient.Create
	I0819 19:12:34.235196  719822 start.go:167] duration metric: took 9.78665146s to libmachine.API.Create "addons-764717"
	I0819 19:12:34.235206  719822 start.go:293] postStartSetup for "addons-764717" (driver="docker")
	I0819 19:12:34.235216  719822 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:34.235268  719822 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:34.235308  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:34.251903  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:34.351162  719822 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:34.354447  719822 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 19:12:34.354491  719822 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 19:12:34.354517  719822 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 19:12:34.354525  719822 info.go:137] Remote host: Ubuntu 22.04.4 LTS
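
The three "Couldn't set key" messages above are benign: libmachine maps /etc/os-release keys onto a fixed struct and merely logs keys it has no field for (VERSION_CODENAME, PRIVACY_POLICY_URL, UBUNTU_CODENAME), then reports the PRETTY_NAME as the remote host. A minimal standalone sketch of such a parse, not minikube's actual code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease reads key=value pairs from an os-release file,
// stripping surrounding quotes from values.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	out := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			out[k] = strings.Trim(v, `"`)
		}
	}
	return out, sc.Err()
}

func main() {
	info, err := parseOSRelease("/etc/os-release")
	if err != nil {
		panic(err)
	}
	// On the kicbase image above this prints "Ubuntu 22.04.4 LTS".
	fmt.Println(info["PRETTY_NAME"])
}
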
	I0819 19:12:34.354535  719822 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-713648/.minikube/addons for local assets ...
	I0819 19:12:34.354606  719822 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-713648/.minikube/files for local assets ...
	I0819 19:12:34.354634  719822 start.go:296] duration metric: took 119.422375ms for postStartSetup
	I0819 19:12:34.354956  719822 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-764717
	I0819 19:12:34.372427  719822 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/config.json ...
	I0819 19:12:34.372791  719822 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:34.372854  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:34.390486  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:34.482578  719822 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 19:12:34.487233  719822 start.go:128] duration metric: took 10.040638855s to createHost
	I0819 19:12:34.487261  719822 start.go:83] releasing machines lock for "addons-764717", held for 10.04080225s
	I0819 19:12:34.487333  719822 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-764717
	I0819 19:12:34.505463  719822 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:34.505525  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:34.505805  719822 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:34.505875  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:34.529417  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:34.530192  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:34.768124  719822 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:34.772645  719822 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 19:12:34.776854  719822 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0819 19:12:34.802962  719822 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0819 19:12:34.803089  719822 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:34.833185  719822 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
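
The two find commands above first patch the loopback CNI config (injecting a "name" field and pinning cniVersion to 1.0.0), then sideline the stock bridge/podman configs by renaming them with a .mk_disabled suffix, clearing the way for the kindnet CNI chosen later. A rough Go equivalent of the disable step, assuming the same /etc/cni/net.d layout and suffix convention:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI configs with a
// ".mk_disabled" suffix so the container runtime ignores them,
// mirroring the `find ... -exec mv` call in the log.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableBridgeCNI("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	fmt.Println("disabled:", files)
}
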
	I0819 19:12:34.833221  719822 start.go:495] detecting cgroup driver to use...
	I0819 19:12:34.833255  719822 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 19:12:34.833318  719822 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 19:12:34.846125  719822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 19:12:34.858401  719822 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:34.858496  719822 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:34.872594  719822 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:34.887527  719822 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:34.967724  719822 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:35.075534  719822 docker.go:233] disabling docker service ...
	I0819 19:12:35.075614  719822 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:35.097120  719822 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:35.110156  719822 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:35.206789  719822 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:35.298966  719822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:35.311059  719822 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:35.328452  719822 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 19:12:35.338490  719822 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 19:12:35.348348  719822 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 19:12:35.348438  719822 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 19:12:35.358911  719822 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 19:12:35.369659  719822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 19:12:35.380029  719822 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 19:12:35.390151  719822 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:35.399414  719822 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 19:12:35.409420  719822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 19:12:35.419446  719822 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 19:12:35.429337  719822 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:35.438400  719822 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:12:35.447043  719822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:35.538411  719822 ssh_runner.go:195] Run: sudo systemctl restart containerd
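
The sed pipeline above rewrites /etc/containerd/config.toml in place: pin the sandbox image to registry.k8s.io/pause:3.10, move every runtime to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and, since a "cgroupfs" host driver was detected, force SystemdCgroup = false before the daemon-reload and restart. A small Go sketch of that last substitution, using the same regular expression as the sed call:

package main

import (
	"os"
	"regexp"
)

// forceCgroupfs flips containerd's runc shim onto the cgroupfs driver,
// mirroring `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`.
func forceCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := forceCgroupfs("/etc/containerd/config.toml"); err != nil {
		panic(err)
	}
	// A real caller would follow up with `systemctl restart containerd`,
	// as the log does.
}
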
	I0819 19:12:35.659000  719822 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0819 19:12:35.659088  719822 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0819 19:12:35.662730  719822 start.go:563] Will wait 60s for crictl version
	I0819 19:12:35.662794  719822 ssh_runner.go:195] Run: which crictl
	I0819 19:12:35.666080  719822 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:35.705242  719822 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0819 19:12:35.705323  719822 ssh_runner.go:195] Run: containerd --version
	I0819 19:12:35.729073  719822 ssh_runner.go:195] Run: containerd --version
	I0819 19:12:35.756032  719822 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0819 19:12:35.758656  719822 cli_runner.go:164] Run: docker network inspect addons-764717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 19:12:35.774069  719822 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:35.777779  719822 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
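
Because the preceding grep found no host.minikube.internal entry, the bash one-liner above rewrites /etc/hosts: keep every line except a stale mapping, append the gateway IP, then copy the temp file back over /etc/hosts. The same edit sketched in Go, writing the file directly rather than via /tmp/h.$$ and sudo cp:

package main

import (
	"os"
	"strings"
)

// pinHost drops any existing entry for name, then appends "ip\tname",
// mirroring the grep -v / echo / cp pipeline in the log.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+name) { // drop stale entries
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
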
	I0819 19:12:35.788607  719822 kubeadm.go:883] updating cluster {Name:addons-764717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-764717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:35.788735  719822 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 19:12:35.788802  719822 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:35.825645  719822 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 19:12:35.825671  719822 containerd.go:534] Images already preloaded, skipping extraction
	I0819 19:12:35.825733  719822 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:35.861311  719822 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 19:12:35.861333  719822 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:12:35.861343  719822 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0819 19:12:35.861476  719822 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-764717 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-764717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:35.861566  719822 ssh_runner.go:195] Run: sudo crictl info
	I0819 19:12:35.899897  719822 cni.go:84] Creating CNI manager for ""
	I0819 19:12:35.899920  719822 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 19:12:35.899930  719822 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:35.899951  719822 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-764717 NodeName:addons-764717 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:12:35.900082  719822 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-764717"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:35.900150  719822 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:12:35.908624  719822 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:35.908694  719822 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:35.917098  719822 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 19:12:35.934931  719822 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:35.953299  719822 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0819 19:12:35.971657  719822 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:35.975292  719822 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:35.987049  719822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:36.084639  719822 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:36.099876  719822 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717 for IP: 192.168.49.2
	I0819 19:12:36.099902  719822 certs.go:194] generating shared ca certs ...
	I0819 19:12:36.099920  719822 certs.go:226] acquiring lock for ca certs: {Name:mkead9a504602001be031927c9caa2cd7d2088f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:36.100534  719822 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-713648/.minikube/ca.key
	I0819 19:12:36.348291  719822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-713648/.minikube/ca.crt ...
	I0819 19:12:36.348325  719822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-713648/.minikube/ca.crt: {Name:mkcdd0c8bbe8a1341cfa935e788f1800c83f0433 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:36.348521  719822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-713648/.minikube/ca.key ...
	I0819 19:12:36.348534  719822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-713648/.minikube/ca.key: {Name:mkce4ae3f85f5fdad67ee48033e30f491158adee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:36.348627  719822 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-713648/.minikube/proxy-client-ca.key
	I0819 19:12:37.231563  719822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-713648/.minikube/proxy-client-ca.crt ...
	I0819 19:12:37.231604  719822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-713648/.minikube/proxy-client-ca.crt: {Name:mk184ac9cfee04fe72202c3fea73cf8bc9a033c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:37.231819  719822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-713648/.minikube/proxy-client-ca.key ...
	I0819 19:12:37.231833  719822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-713648/.minikube/proxy-client-ca.key: {Name:mk13aeff98c5a17a2a4f19b316633a280a9f9682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:37.231924  719822 certs.go:256] generating profile certs ...
	I0819 19:12:37.231988  719822 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.key
	I0819 19:12:37.232004  719822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt with IP's: []
	I0819 19:12:37.505565  719822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt ...
	I0819 19:12:37.505611  719822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: {Name:mk338377b008c0bdf4231d1a7495a24a09b40b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:37.506217  719822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.key ...
	I0819 19:12:37.506235  719822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.key: {Name:mk7b422647efac56ad612fb7a9822282c4b185a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:37.506324  719822 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/apiserver.key.38a1012e
	I0819 19:12:37.506350  719822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/apiserver.crt.38a1012e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 19:12:37.774013  719822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/apiserver.crt.38a1012e ...
	I0819 19:12:37.774047  719822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/apiserver.crt.38a1012e: {Name:mk43b7c09504c8d8b091319891cb06697582c0c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:37.774242  719822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/apiserver.key.38a1012e ...
	I0819 19:12:37.774258  719822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/apiserver.key.38a1012e: {Name:mk8d918e24b4e08e958585476add37c4779544cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:37.774347  719822 certs.go:381] copying /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/apiserver.crt.38a1012e -> /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/apiserver.crt
	I0819 19:12:37.774426  719822 certs.go:385] copying /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/apiserver.key.38a1012e -> /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/apiserver.key
	I0819 19:12:37.774483  719822 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/proxy-client.key
	I0819 19:12:37.774503  719822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/proxy-client.crt with IP's: []
	I0819 19:12:37.925140  719822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/proxy-client.crt ...
	I0819 19:12:37.925172  719822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/proxy-client.crt: {Name:mk3026dc8259392755659a873039e5812d1aca0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:37.925936  719822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/proxy-client.key ...
	I0819 19:12:37.925955  719822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/proxy-client.key: {Name:mk9bdcd7af4a25856bf39f30e769b1dde9b61be6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:37.926150  719822 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-713648/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:37.926192  719822 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-713648/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:37.926223  719822 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-713648/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:37.926251  719822 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-713648/.minikube/certs/key.pem (1679 bytes)
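
The "generating signed profile cert" steps above are plain x509 plumbing: mint a key, then have minikubeCA sign a serving certificate carrying the apiserver's IP SANs (10.96.0.1 is the in-cluster kubernetes service IP, 192.168.49.2 the node IP). A self-contained sketch with crypto/x509; a throwaway CA stands in for the real ca.crt/ca.key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for minikubeCA (ca.crt / ca.key above).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Serving certificate with the IP SANs from the log line above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	check(err)
	check(os.WriteFile("apiserver.crt",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
}
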
	I0819 19:12:37.926837  719822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-713648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:37.955188  719822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-713648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:37.982206  719822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-713648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:38.009324  719822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-713648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:12:38.040438  719822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 19:12:38.069382  719822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:12:38.098102  719822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:38.124089  719822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:38.149289  719822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-713648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:38.175632  719822 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:38.195207  719822 ssh_runner.go:195] Run: openssl version
	I0819 19:12:38.201103  719822 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:38.211006  719822 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:38.214636  719822 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 19:12 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:38.214746  719822 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:38.221565  719822 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
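
The openssl x509 -hash run plus the ln -fs above install minikubeCA under its subject-hash filename (b5213941.0) in /etc/ssl/certs, which is how OpenSSL-style lookups inside the guest locate the CA. A quick Go check, using the paths from the log, that a leaf certificate actually chains to that CA:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("no CA certificates parsed")
	}

	leafPEM, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(leafPEM)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	leaf, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if _, err := leaf.Verify(x509.VerifyOptions{Roots: pool}); err != nil {
		panic(err)
	}
	fmt.Println("apiserver.crt chains to minikubeCA")
}
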
	I0819 19:12:38.231290  719822 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:38.234716  719822 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 19:12:38.234768  719822 kubeadm.go:392] StartCluster: {Name:addons-764717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-764717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:38.234858  719822 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:38.234920  719822 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:38.272637  719822 cri.go:89] found id: ""
	I0819 19:12:38.272767  719822 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:38.282503  719822 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:38.291975  719822 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 19:12:38.292053  719822 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:38.303494  719822 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:38.303516  719822 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:38.303602  719822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:12:38.314784  719822 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:38.314883  719822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:38.324212  719822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:12:38.334476  719822 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:38.334652  719822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:38.343604  719822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:12:38.353475  719822 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:38.353629  719822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:38.363152  719822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:12:38.373219  719822 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:38.373286  719822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:38.381816  719822 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 19:12:38.421185  719822 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 19:12:38.421425  719822 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:12:38.441104  719822 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 19:12:38.441187  719822 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0819 19:12:38.441224  719822 kubeadm.go:310] OS: Linux
	I0819 19:12:38.441272  719822 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 19:12:38.441322  719822 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 19:12:38.441371  719822 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 19:12:38.441421  719822 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 19:12:38.441473  719822 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 19:12:38.441523  719822 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 19:12:38.441580  719822 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 19:12:38.441649  719822 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 19:12:38.441703  719822 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 19:12:38.504700  719822 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:12:38.504876  719822 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:12:38.504982  719822 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 19:12:38.509824  719822 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:12:38.515934  719822 out.go:235]   - Generating certificates and keys ...
	I0819 19:12:38.516054  719822 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:12:38.516131  719822 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:12:38.995086  719822 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 19:12:39.139673  719822 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 19:12:39.734739  719822 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 19:12:40.054941  719822 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 19:12:40.627944  719822 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 19:12:40.628449  719822 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-764717 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 19:12:41.204862  719822 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 19:12:41.205255  719822 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-764717 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 19:12:41.620498  719822 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 19:12:42.113961  719822 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 19:12:42.323237  719822 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 19:12:42.323894  719822 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:12:42.627023  719822 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:12:42.843037  719822 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 19:12:44.233429  719822 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:12:44.836259  719822 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:12:45.492720  719822 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:12:45.493472  719822 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:12:45.496681  719822 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:12:45.500127  719822 out.go:235]   - Booting up control plane ...
	I0819 19:12:45.500231  719822 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:12:45.500306  719822 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:12:45.500372  719822 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:12:45.511089  719822 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:12:45.517988  719822 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:12:45.518262  719822 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:12:45.622219  719822 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 19:12:45.622343  719822 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 19:12:47.118200  719822 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.502111655s
	I0819 19:12:47.118288  719822 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 19:12:53.619762  719822 kubeadm.go:310] [api-check] The API server is healthy after 6.501622722s
	I0819 19:12:53.640229  719822 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 19:12:53.668978  719822 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 19:12:53.701096  719822 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 19:12:53.701286  719822 kubeadm.go:310] [mark-control-plane] Marking the node addons-764717 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 19:12:53.713878  719822 kubeadm.go:310] [bootstrap-token] Using token: vbwiyt.9w2e7cty3dio5clz
	I0819 19:12:53.716580  719822 out.go:235]   - Configuring RBAC rules ...
	I0819 19:12:53.716721  719822 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 19:12:53.721688  719822 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 19:12:53.734951  719822 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 19:12:53.740324  719822 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 19:12:53.744982  719822 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 19:12:53.749628  719822 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 19:12:54.027603  719822 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 19:12:54.457549  719822 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 19:12:55.027188  719822 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 19:12:55.029079  719822 kubeadm.go:310] 
	I0819 19:12:55.029163  719822 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 19:12:55.029174  719822 kubeadm.go:310] 
	I0819 19:12:55.029250  719822 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 19:12:55.029259  719822 kubeadm.go:310] 
	I0819 19:12:55.029284  719822 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 19:12:55.029354  719822 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 19:12:55.029408  719822 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 19:12:55.029416  719822 kubeadm.go:310] 
	I0819 19:12:55.029470  719822 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 19:12:55.029478  719822 kubeadm.go:310] 
	I0819 19:12:55.029525  719822 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 19:12:55.029534  719822 kubeadm.go:310] 
	I0819 19:12:55.029585  719822 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 19:12:55.029713  719822 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 19:12:55.029789  719822 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 19:12:55.029797  719822 kubeadm.go:310] 
	I0819 19:12:55.029880  719822 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 19:12:55.029962  719822 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 19:12:55.029972  719822 kubeadm.go:310] 
	I0819 19:12:55.030054  719822 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vbwiyt.9w2e7cty3dio5clz \
	I0819 19:12:55.030158  719822 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa707379916665c9af57e699aaf9538c818997cdc80f78709ce47d3d08cf4d5c \
	I0819 19:12:55.030182  719822 kubeadm.go:310] 	--control-plane 
	I0819 19:12:55.030187  719822 kubeadm.go:310] 
	I0819 19:12:55.030269  719822 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 19:12:55.030274  719822 kubeadm.go:310] 
	I0819 19:12:55.030353  719822 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vbwiyt.9w2e7cty3dio5clz \
	I0819 19:12:55.030450  719822 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa707379916665c9af57e699aaf9538c818997cdc80f78709ce47d3d08cf4d5c 
	I0819 19:12:55.038865  719822 kubeadm.go:310] W0819 19:12:38.418039    1032 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:12:55.039154  719822 kubeadm.go:310] W0819 19:12:38.418822    1032 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:12:55.039362  719822 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0819 19:12:55.039478  719822 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
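
Everything from the preflight checks at 19:12:38.42 to the join tokens above is one kubeadm init run, driven by the generated /var/tmp/minikube/kubeadm.yaml. A much-simplified Go wrapper in the same spirit; the real invocation at 19:12:38.381816 goes through bash -c with a pinned PATH and a longer --ignore-preflight-errors list:

package main

import (
	"fmt"
	"os/exec"
)

// kubeadmInit runs `kubeadm init` against a generated config.
// `env PATH=...` resolves kubeadm from the pinned binary directory,
// as the logged invocation does.
func kubeadmInit(version, config string) ([]byte, error) {
	binDir := "/var/lib/minikube/binaries/" + version
	cmd := exec.Command("sudo", "env", "PATH="+binDir+":/usr/sbin:/usr/bin",
		"kubeadm", "init", "--config", config,
		"--ignore-preflight-errors=SystemVerification")
	return cmd.CombinedOutput()
}

func main() {
	out, err := kubeadmInit("v1.31.0", "/var/tmp/minikube/kubeadm.yaml")
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
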
	I0819 19:12:55.039947  719822 cni.go:84] Creating CNI manager for ""
	I0819 19:12:55.040001  719822 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 19:12:55.043268  719822 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 19:12:55.058048  719822 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 19:12:55.062901  719822 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 19:12:55.062925  719822 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 19:12:55.085322  719822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 19:12:55.396352  719822 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:12:55.396443  719822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:12:55.396482  719822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-764717 minikube.k8s.io/updated_at=2024_08_19T19_12_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=addons-764717 minikube.k8s.io/primary=true
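
The kubectl label call above stamps the new node with minikube metadata (version, commit, primary marker). The same overwrite via client-go rather than shelling out; kubeconfig path as in the log, label set trimmed to one entry:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	node, err := cs.CoreV1().Nodes().Get(ctx, "addons-764717", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	// One of the several minikube.k8s.io/* labels applied above.
	node.Labels["minikube.k8s.io/primary"] = "true"
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("labeled", node.Name)
}
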
	I0819 19:12:55.594119  719822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:12:55.594216  719822 ops.go:34] apiserver oom_adj: -16
	I0819 19:12:56.094259  719822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:12:56.594638  719822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:12:57.094527  719822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:12:57.594939  719822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:12:58.094617  719822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:12:58.594299  719822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:12:59.094452  719822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:12:59.264217  719822 kubeadm.go:1113] duration metric: took 3.867829819s to wait for elevateKubeSystemPrivileges
	I0819 19:12:59.264246  719822 kubeadm.go:394] duration metric: took 21.029480924s to StartCluster
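
The repeated `kubectl get sa default` calls above, at half-second intervals, are a readiness poll: the default ServiceAccount only appears once the controller-manager's serviceaccount controller has run, so minikube uses it as a cluster-is-usable gate before StartCluster returns. A client-go sketch of the same wait:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, up to 2 minutes, for the default ServiceAccount.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			return err == nil, nil // keep polling on any error
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("default ServiceAccount is ready")
}
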
	I0819 19:12:59.264265  719822 settings.go:142] acquiring lock: {Name:mkaadefe433938b57be9d3bfe15e882f26f4b33f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:59.264381  719822 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-713648/kubeconfig
	I0819 19:12:59.264829  719822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-713648/kubeconfig: {Name:mkf31304d026c3efc4e221dc3bed7b6fa7b4139d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:59.265458  719822 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 19:12:59.265487  719822 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 19:12:59.265864  719822 config.go:182] Loaded profile config "addons-764717": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 19:12:59.265912  719822 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 19:12:59.266035  719822 addons.go:69] Setting yakd=true in profile "addons-764717"
	I0819 19:12:59.266058  719822 addons.go:234] Setting addon yakd=true in "addons-764717"
	I0819 19:12:59.266082  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.266635  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.267139  719822 addons.go:69] Setting metrics-server=true in profile "addons-764717"
	I0819 19:12:59.267158  719822 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-764717"
	I0819 19:12:59.267181  719822 addons.go:234] Setting addon metrics-server=true in "addons-764717"
	I0819 19:12:59.267183  719822 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-764717"
	I0819 19:12:59.267215  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.267445  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.267696  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.272338  719822 addons.go:69] Setting cloud-spanner=true in profile "addons-764717"
	I0819 19:12:59.272389  719822 addons.go:234] Setting addon cloud-spanner=true in "addons-764717"
	I0819 19:12:59.272427  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.273151  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.273293  719822 addons.go:69] Setting volcano=true in profile "addons-764717"
	I0819 19:12:59.273323  719822 addons.go:234] Setting addon volcano=true in "addons-764717"
	I0819 19:12:59.273348  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.275808  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.280083  719822 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-764717"
	I0819 19:12:59.280158  719822 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-764717"
	I0819 19:12:59.280189  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.280635  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.288774  719822 addons.go:69] Setting volumesnapshots=true in profile "addons-764717"
	I0819 19:12:59.288865  719822 addons.go:234] Setting addon volumesnapshots=true in "addons-764717"
	I0819 19:12:59.288937  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.289423  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.299478  719822 addons.go:69] Setting default-storageclass=true in profile "addons-764717"
	I0819 19:12:59.299526  719822 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-764717"
	I0819 19:12:59.300087  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.300680  719822 addons.go:69] Setting gcp-auth=true in profile "addons-764717"
	I0819 19:12:59.300783  719822 mustload.go:65] Loading cluster: addons-764717
	I0819 19:12:59.300958  719822 config.go:182] Loaded profile config "addons-764717": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 19:12:59.301195  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.311378  719822 out.go:177] * Verifying Kubernetes components...
	I0819 19:12:59.311514  719822 addons.go:69] Setting ingress=true in profile "addons-764717"
	I0819 19:12:59.311560  719822 addons.go:234] Setting addon ingress=true in "addons-764717"
	I0819 19:12:59.311612  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.312118  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.334200  719822 addons.go:69] Setting ingress-dns=true in profile "addons-764717"
	I0819 19:12:59.334257  719822 addons.go:234] Setting addon ingress-dns=true in "addons-764717"
	I0819 19:12:59.334305  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.334815  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.359405  719822 addons.go:69] Setting inspektor-gadget=true in profile "addons-764717"
	I0819 19:12:59.359481  719822 addons.go:234] Setting addon inspektor-gadget=true in "addons-764717"
	I0819 19:12:59.359553  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.360251  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.267141  719822 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-764717"
	I0819 19:12:59.381650  719822 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-764717"
	I0819 19:12:59.381698  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.382209  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.267149  719822 addons.go:69] Setting registry=true in profile "addons-764717"
	I0819 19:12:59.426993  719822 addons.go:234] Setting addon registry=true in "addons-764717"
	I0819 19:12:59.427038  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.427542  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.443753  719822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:59.267154  719822 addons.go:69] Setting storage-provisioner=true in profile "addons-764717"
	I0819 19:12:59.452391  719822 addons.go:234] Setting addon storage-provisioner=true in "addons-764717"
	I0819 19:12:59.452436  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.452880  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.500365  719822 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 19:12:59.503554  719822 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 19:12:59.506746  719822 addons.go:234] Setting addon default-storageclass=true in "addons-764717"
	I0819 19:12:59.506790  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.507234  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.511107  719822 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0819 19:12:59.516450  719822 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0819 19:12:59.511331  719822 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 19:12:59.511337  719822 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 19:12:59.511591  719822 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 19:12:59.529450  719822 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 19:12:59.529629  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:59.511596  719822 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 19:12:59.512187  719822 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-764717"
	I0819 19:12:59.530674  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.512230  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:12:59.511577  719822 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 19:12:59.528533  719822 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 19:12:59.542671  719822 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 19:12:59.542782  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:59.562822  719822 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0819 19:12:59.569657  719822 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 19:12:59.569881  719822 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 19:12:59.569906  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 19:12:59.569977  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:59.569754  719822 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:12:59.572649  719822 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:12:59.572771  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:59.581963  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:12:59.601401  719822 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0819 19:12:59.601438  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0819 19:12:59.601533  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:59.659690  719822 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 19:12:59.669723  719822 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 19:12:59.670395  719822 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 19:12:59.678013  719822 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 19:12:59.678218  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:59.678013  719822 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 19:12:59.681745  719822 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 19:12:59.681766  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 19:12:59.681853  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:59.679169  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:59.682942  719822 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 19:12:59.680060  719822 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 19:12:59.681523  719822 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 19:12:59.683117  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 19:12:59.683194  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:59.693416  719822 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 19:12:59.696127  719822 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 19:12:59.698790  719822 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 19:12:59.701680  719822 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 19:12:59.701749  719822 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 19:12:59.701761  719822 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 19:12:59.701844  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:59.704693  719822 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 19:12:59.704715  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 19:12:59.704784  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:59.716715  719822 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 19:12:59.716745  719822 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 19:12:59.716820  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:59.741712  719822 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:59.742076  719822 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:12:59.742090  719822 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:12:59.742151  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:59.751725  719822 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 19:12:59.752103  719822 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:12:59.752126  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:12:59.752189  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:59.782687  719822 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 19:12:59.790024  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:59.790299  719822 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 19:12:59.790839  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 19:12:59.792059  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:59.816484  719822 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 19:12:59.819710  719822 out.go:177]   - Using image docker.io/busybox:stable
	I0819 19:12:59.822505  719822 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 19:12:59.822573  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 19:12:59.822674  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:12:59.843064  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:59.855900  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:59.885920  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:59.905968  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:59.940724  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:59.946310  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:59.954342  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:59.954524  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:59.956371  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:59.973355  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:12:59.982859  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
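All of the ssh clients above target 127.0.0.1:33528, which is what the repeated "docker container inspect -f" template calls earlier in the log resolve: the host port Docker mapped to the addons-764717 container's 22/tcp. Run by hand, the lookup is a one-liner (the 33528 shown is taken from the Port values logged above):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-764717
	33528
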
	I0819 19:13:00.270787  719822 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.005296031s)
	I0819 19:13:00.270987  719822 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 19:13:00.271289  719822 ssh_runner.go:195] Run: sudo systemctl start kubelet
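The sed pipeline above patches the CoreDNS Corefile before replacing the ConfigMap: it inserts a hosts block for host.minikube.internal ahead of the forward directive, and a log directive ahead of errors. Reconstructed from the two insert expressions, the patched fragment should look roughly like this (a sketch; unrelated plugin lines are elided):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}
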
	I0819 19:13:00.521584  719822 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 19:13:00.521800  719822 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 19:13:00.597827  719822 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 19:13:00.597851  719822 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 19:13:00.740202  719822 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 19:13:00.740232  719822 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 19:13:00.750849  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 19:13:00.754265  719822 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:13:00.754289  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 19:13:00.776099  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 19:13:00.804472  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0819 19:13:00.825354  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 19:13:00.836114  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 19:13:00.843154  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:13:00.875362  719822 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 19:13:00.875439  719822 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 19:13:00.921902  719822 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 19:13:00.921967  719822 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 19:13:00.988661  719822 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 19:13:00.988735  719822 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 19:13:00.995541  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 19:13:01.051770  719822 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 19:13:01.051844  719822 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 19:13:01.056156  719822 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:13:01.056222  719822 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:13:01.128945  719822 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 19:13:01.129018  719822 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 19:13:01.181195  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:13:01.198374  719822 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 19:13:01.198447  719822 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 19:13:01.242364  719822 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 19:13:01.242427  719822 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 19:13:01.346065  719822 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 19:13:01.346137  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 19:13:01.424834  719822 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 19:13:01.424909  719822 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 19:13:01.432327  719822 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:13:01.432390  719822 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:13:01.554339  719822 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 19:13:01.554407  719822 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 19:13:01.562477  719822 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 19:13:01.562555  719822 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 19:13:01.704977  719822 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 19:13:01.705048  719822 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 19:13:01.705467  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 19:13:01.817774  719822 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 19:13:01.817804  719822 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 19:13:01.855248  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:13:01.874726  719822 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 19:13:01.874791  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 19:13:01.886842  719822 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 19:13:01.886916  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 19:13:01.901742  719822 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 19:13:01.901815  719822 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 19:13:01.946182  719822 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 19:13:01.946210  719822 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 19:13:02.224368  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 19:13:02.275143  719822 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 19:13:02.275170  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 19:13:02.277756  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 19:13:02.351757  719822 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 19:13:02.351785  719822 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 19:13:02.549083  719822 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 19:13:02.549112  719822 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 19:13:02.598971  719822 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 19:13:02.598999  719822 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 19:13:02.772492  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.021578468s)
	I0819 19:13:02.772433  719822 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.501108128s)
	I0819 19:13:02.772710  719822 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.501694025s)
	I0819 19:13:02.772730  719822 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0819 19:13:02.774731  719822 node_ready.go:35] waiting up to 6m0s for node "addons-764717" to be "Ready" ...
	I0819 19:13:02.779996  719822 node_ready.go:49] node "addons-764717" has status "Ready":"True"
	I0819 19:13:02.780019  719822 node_ready.go:38] duration metric: took 5.239263ms for node "addons-764717" to be "Ready" ...
	I0819 19:13:02.780029  719822 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:02.790113  719822 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jjj55" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:03.031109  719822 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 19:13:03.031144  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 19:13:03.096964  719822 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 19:13:03.096995  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 19:13:03.273339  719822 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 19:13:03.273373  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 19:13:03.277346  719822 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-764717" context rescaled to 1 replicas
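The rescale logged above trims the stock kubeadm coredns Deployment (two replicas by default) down to one for this single-node cluster; a hand-run equivalent would be something like the following sketch, not the exact call minikube itself makes:

	kubectl --context addons-764717 -n kube-system scale deployment coredns --replicas=1
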
	I0819 19:13:03.333098  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 19:13:03.524336  719822 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 19:13:03.524380  719822 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 19:13:03.822729  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 19:13:04.735696  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.959511714s)
	I0819 19:13:04.800020  719822 pod_ready.go:103] pod "coredns-6f6b679f8f-jjj55" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:06.828031  719822 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 19:13:06.828181  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:13:06.839174  719822 pod_ready.go:103] pod "coredns-6f6b679f8f-jjj55" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:06.862460  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:13:07.512363  719822 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 19:13:07.631998  719822 addons.go:234] Setting addon gcp-auth=true in "addons-764717"
	I0819 19:13:07.632065  719822 host.go:66] Checking if "addons-764717" exists ...
	I0819 19:13:07.632578  719822 cli_runner.go:164] Run: docker container inspect addons-764717 --format={{.State.Status}}
	I0819 19:13:07.667547  719822 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 19:13:07.667606  719822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-764717
	I0819 19:13:07.691293  719822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/addons-764717/id_rsa Username:docker}
	I0819 19:13:08.859205  719822 pod_ready.go:103] pod "coredns-6f6b679f8f-jjj55" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:10.117847  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.313341283s)
	I0819 19:13:10.117998  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.292625681s)
	I0819 19:13:10.118522  719822 addons.go:475] Verifying addon ingress=true in "addons-764717"
	I0819 19:13:10.118688  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.936904696s)
	I0819 19:13:10.118054  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.281878449s)
	I0819 19:13:10.118107  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.274887496s)
	I0819 19:13:10.118140  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.122532925s)
	I0819 19:13:10.118194  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.41268412s)
	I0819 19:13:10.119003  719822 addons.go:475] Verifying addon registry=true in "addons-764717"
	I0819 19:13:10.118247  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.262925734s)
	I0819 19:13:10.119070  719822 addons.go:475] Verifying addon metrics-server=true in "addons-764717"
	I0819 19:13:10.118276  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.893882087s)
	I0819 19:13:10.118348  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.840565043s)
	W0819 19:13:10.119257  719822 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 19:13:10.119281  719822 retry.go:31] will retry after 173.566509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
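The failure above is the usual CRD establishment race: the batch apply creates the VolumeSnapshot CRDs and a VolumeSnapshotClass in one shot, and the API server has not finished registering the new kind by the time the class is submitted, hence "ensure CRDs are installed first". minikube simply retries (the retry at 19:13:10.293389 below adds --force). Done by hand, the conventional sequence is to apply the CRDs, wait for them to be established, then apply the custom resources; a sketch using the manifests from this run:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
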
	I0819 19:13:10.118404  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.785277475s)
	I0819 19:13:10.122052  719822 out.go:177] * Verifying ingress addon...
	I0819 19:13:10.123990  719822 out.go:177] * Verifying registry addon...
	I0819 19:13:10.124100  719822 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-764717 service yakd-dashboard -n yakd-dashboard
	
	I0819 19:13:10.127984  719822 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 19:13:10.128991  719822 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 19:13:10.179694  719822 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 19:13:10.179723  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:10.180685  719822 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 19:13:10.180708  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:10.293389  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 19:13:10.644402  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:10.646943  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:10.788644  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.965846468s)
	I0819 19:13:10.788681  719822 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-764717"
	I0819 19:13:10.788993  719822 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.121421922s)
	I0819 19:13:10.791858  719822 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 19:13:10.791865  719822 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 19:13:10.794722  719822 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 19:13:10.795563  719822 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 19:13:10.797494  719822 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 19:13:10.797519  719822 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 19:13:10.810577  719822 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 19:13:10.811109  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:10.876954  719822 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 19:13:10.876976  719822 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 19:13:10.942693  719822 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 19:13:10.942715  719822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 19:13:11.014782  719822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 19:13:11.135083  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:11.136144  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:11.317671  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:11.318857  719822 pod_ready.go:103] pod "coredns-6f6b679f8f-jjj55" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:11.633244  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:11.635608  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:11.802935  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:12.126005  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.832553252s)
	I0819 19:13:12.126100  719822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.111285199s)
	I0819 19:13:12.129521  719822 addons.go:475] Verifying addon gcp-auth=true in "addons-764717"
	I0819 19:13:12.133714  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:12.133878  719822 out.go:177] * Verifying gcp-auth addon...
	I0819 19:13:12.135766  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:12.137671  719822 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 19:13:12.140646  719822 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 19:13:12.303383  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:12.634282  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:12.635161  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:12.800707  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:13.135617  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:13.137244  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:13.302464  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:13.636544  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:13.638268  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:13.797360  719822 pod_ready.go:103] pod "coredns-6f6b679f8f-jjj55" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:13.801857  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:14.135501  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:14.137416  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:14.301281  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:14.635099  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:14.636949  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:14.812695  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:15.136907  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:15.142526  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:15.303586  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:15.634079  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:15.634771  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:15.796238  719822 pod_ready.go:93] pod "coredns-6f6b679f8f-jjj55" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:15.796264  719822 pod_ready.go:82] duration metric: took 13.006086793s for pod "coredns-6f6b679f8f-jjj55" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:15.796275  719822 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-prjrm" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:15.798491  719822 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-prjrm" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-prjrm" not found
	I0819 19:13:15.798519  719822 pod_ready.go:82] duration metric: took 2.235688ms for pod "coredns-6f6b679f8f-prjrm" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:15.798531  719822 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-prjrm" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-prjrm" not found
	I0819 19:13:15.798538  719822 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-764717" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:15.800391  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:15.805701  719822 pod_ready.go:93] pod "etcd-addons-764717" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:15.805728  719822 pod_ready.go:82] duration metric: took 7.182007ms for pod "etcd-addons-764717" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:15.805743  719822 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-764717" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:15.810902  719822 pod_ready.go:93] pod "kube-apiserver-addons-764717" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:15.810929  719822 pod_ready.go:82] duration metric: took 5.178349ms for pod "kube-apiserver-addons-764717" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:15.810941  719822 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-764717" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:15.816817  719822 pod_ready.go:93] pod "kube-controller-manager-addons-764717" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:15.816843  719822 pod_ready.go:82] duration metric: took 5.893493ms for pod "kube-controller-manager-addons-764717" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:15.816855  719822 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ffzf6" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:15.993401  719822 pod_ready.go:93] pod "kube-proxy-ffzf6" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:15.993426  719822 pod_ready.go:82] duration metric: took 176.562358ms for pod "kube-proxy-ffzf6" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:15.993437  719822 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-764717" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:16.133634  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:16.134965  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:16.301406  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:16.394305  719822 pod_ready.go:93] pod "kube-scheduler-addons-764717" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:16.394332  719822 pod_ready.go:82] duration metric: took 400.885351ms for pod "kube-scheduler-addons-764717" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:16.394345  719822 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-kvhw5" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:16.634886  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:16.636112  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:16.801807  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:17.134367  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:17.135696  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:17.301960  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:17.634011  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:17.634536  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:17.800725  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:18.134114  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:18.134693  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:18.300791  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:18.403861  719822 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-kvhw5" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:18.633860  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:18.635668  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:18.802335  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:19.133269  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:19.136148  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:19.301726  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:19.642202  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:19.643385  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:19.801359  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:20.135085  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:20.136578  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:20.301892  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:20.631915  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:20.634073  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:20.800236  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:20.899860  719822 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-kvhw5" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:21.137694  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:21.139267  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:21.301126  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:21.635756  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:21.637446  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:21.801659  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:22.134412  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:22.134927  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:22.309891  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:22.634815  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:22.635765  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:22.802817  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:22.900536  719822 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-kvhw5" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:23.133697  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:23.134252  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:23.301288  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:23.633898  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:23.636001  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:23.801375  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:23.901456  719822 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-kvhw5" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:23.901482  719822 pod_ready.go:82] duration metric: took 7.507128438s for pod "nvidia-device-plugin-daemonset-kvhw5" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:23.901493  719822 pod_ready.go:39] duration metric: took 21.121452568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:23.901527  719822 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:13:23.901634  719822 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.919583  719822 api_server.go:72] duration metric: took 24.654062117s to wait for apiserver process to appear ...
	I0819 19:13:23.919609  719822 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:13:23.919630  719822 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 19:13:23.927544  719822 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
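The healthz probe recorded just above can be repeated by hand; a minimal sketch, assuming the addons-764717 cluster is still running and its context is present in the current kubeconfig:

    # Hit the same /healthz endpoint through kubectl's raw API access,
    # reusing the profile's kubeconfig credentials:
    kubectl --context addons-764717 get --raw /healthz
    # a healthy apiserver answers with the bare string: ok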
	I0819 19:13:23.928602  719822 api_server.go:141] control plane version: v1.31.0
	I0819 19:13:23.928632  719822 api_server.go:131] duration metric: took 9.014597ms to wait for apiserver health ...
	I0819 19:13:23.928642  719822 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:13:23.938355  719822 system_pods.go:59] 18 kube-system pods found
	I0819 19:13:23.938398  719822 system_pods.go:61] "coredns-6f6b679f8f-jjj55" [ad3228fb-a563-43d5-bf52-97f3239d4a26] Running
	I0819 19:13:23.938409  719822 system_pods.go:61] "csi-hostpath-attacher-0" [736a6b32-794c-4743-9f49-c08a81e9c3af] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 19:13:23.938417  719822 system_pods.go:61] "csi-hostpath-resizer-0" [d3556d5c-c6c5-44d2-84b5-a000cdc39404] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 19:13:23.938427  719822 system_pods.go:61] "csi-hostpathplugin-8k72f" [1ec2478d-15f8-4a48-8f64-b7aae37b9a80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 19:13:23.938435  719822 system_pods.go:61] "etcd-addons-764717" [c538cfd4-25f5-47df-9f46-bfc19a32be9a] Running
	I0819 19:13:23.938440  719822 system_pods.go:61] "kindnet-p7cbv" [0db82ada-723e-4095-8a5a-d6e4ec771e46] Running
	I0819 19:13:23.938451  719822 system_pods.go:61] "kube-apiserver-addons-764717" [3b5686a0-e208-482b-b693-e5f4c4473e39] Running
	I0819 19:13:23.938456  719822 system_pods.go:61] "kube-controller-manager-addons-764717" [5173bef4-72f2-4584-bf69-4a7fc1370989] Running
	I0819 19:13:23.938461  719822 system_pods.go:61] "kube-ingress-dns-minikube" [3ec67196-fb67-4b2a-809c-872db39d19dd] Running
	I0819 19:13:23.938471  719822 system_pods.go:61] "kube-proxy-ffzf6" [80b742ae-0eb1-43ab-890a-94d06e906770] Running
	I0819 19:13:23.938476  719822 system_pods.go:61] "kube-scheduler-addons-764717" [cca7bc5c-fe80-464c-a78b-e6d2766a8f3d] Running
	I0819 19:13:23.938481  719822 system_pods.go:61] "metrics-server-8988944d9-z27w8" [02399b78-3ca4-4bba-bfcf-3a75829d8cd2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:13:23.938486  719822 system_pods.go:61] "nvidia-device-plugin-daemonset-kvhw5" [7ace71d8-efe9-4b5c-92a1-a84980af040a] Running
	I0819 19:13:23.938492  719822 system_pods.go:61] "registry-6fb4cdfc84-mdlgd" [e16db8aa-91f9-43cd-aa8f-55fb34b58974] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0819 19:13:23.938501  719822 system_pods.go:61] "registry-proxy-95ls7" [4f8e0022-33fe-4566-87c6-b25e8349fbbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 19:13:23.938508  719822 system_pods.go:61] "snapshot-controller-56fcc65765-khfcv" [1a40e624-1896-4fa2-94f6-933045ea4c2d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 19:13:23.938519  719822 system_pods.go:61] "snapshot-controller-56fcc65765-vv4tn" [665c3535-a0b3-4398-807f-ecdb47ac6bc4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 19:13:23.938524  719822 system_pods.go:61] "storage-provisioner" [a5bcff9e-e914-4de2-9740-ceb293ce7618] Running
	I0819 19:13:23.938530  719822 system_pods.go:74] duration metric: took 9.882716ms to wait for pod list to return data ...
	I0819 19:13:23.938542  719822 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:13:23.941455  719822 default_sa.go:45] found service account: "default"
	I0819 19:13:23.941483  719822 default_sa.go:55] duration metric: took 2.933061ms for default service account to be created ...
	I0819 19:13:23.941494  719822 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:13:23.950694  719822 system_pods.go:86] 18 kube-system pods found
	I0819 19:13:23.950735  719822 system_pods.go:89] "coredns-6f6b679f8f-jjj55" [ad3228fb-a563-43d5-bf52-97f3239d4a26] Running
	I0819 19:13:23.950746  719822 system_pods.go:89] "csi-hostpath-attacher-0" [736a6b32-794c-4743-9f49-c08a81e9c3af] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 19:13:23.950755  719822 system_pods.go:89] "csi-hostpath-resizer-0" [d3556d5c-c6c5-44d2-84b5-a000cdc39404] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 19:13:23.950764  719822 system_pods.go:89] "csi-hostpathplugin-8k72f" [1ec2478d-15f8-4a48-8f64-b7aae37b9a80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 19:13:23.950769  719822 system_pods.go:89] "etcd-addons-764717" [c538cfd4-25f5-47df-9f46-bfc19a32be9a] Running
	I0819 19:13:23.950774  719822 system_pods.go:89] "kindnet-p7cbv" [0db82ada-723e-4095-8a5a-d6e4ec771e46] Running
	I0819 19:13:23.950783  719822 system_pods.go:89] "kube-apiserver-addons-764717" [3b5686a0-e208-482b-b693-e5f4c4473e39] Running
	I0819 19:13:23.950788  719822 system_pods.go:89] "kube-controller-manager-addons-764717" [5173bef4-72f2-4584-bf69-4a7fc1370989] Running
	I0819 19:13:23.950797  719822 system_pods.go:89] "kube-ingress-dns-minikube" [3ec67196-fb67-4b2a-809c-872db39d19dd] Running
	I0819 19:13:23.950802  719822 system_pods.go:89] "kube-proxy-ffzf6" [80b742ae-0eb1-43ab-890a-94d06e906770] Running
	I0819 19:13:23.950807  719822 system_pods.go:89] "kube-scheduler-addons-764717" [cca7bc5c-fe80-464c-a78b-e6d2766a8f3d] Running
	I0819 19:13:23.950818  719822 system_pods.go:89] "metrics-server-8988944d9-z27w8" [02399b78-3ca4-4bba-bfcf-3a75829d8cd2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:13:23.950824  719822 system_pods.go:89] "nvidia-device-plugin-daemonset-kvhw5" [7ace71d8-efe9-4b5c-92a1-a84980af040a] Running
	I0819 19:13:23.950834  719822 system_pods.go:89] "registry-6fb4cdfc84-mdlgd" [e16db8aa-91f9-43cd-aa8f-55fb34b58974] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0819 19:13:23.950841  719822 system_pods.go:89] "registry-proxy-95ls7" [4f8e0022-33fe-4566-87c6-b25e8349fbbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 19:13:23.950849  719822 system_pods.go:89] "snapshot-controller-56fcc65765-khfcv" [1a40e624-1896-4fa2-94f6-933045ea4c2d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 19:13:23.950859  719822 system_pods.go:89] "snapshot-controller-56fcc65765-vv4tn" [665c3535-a0b3-4398-807f-ecdb47ac6bc4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 19:13:23.950864  719822 system_pods.go:89] "storage-provisioner" [a5bcff9e-e914-4de2-9740-ceb293ce7618] Running
	I0819 19:13:23.950872  719822 system_pods.go:126] duration metric: took 9.371878ms to wait for k8s-apps to be running ...
	I0819 19:13:23.950882  719822 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:13:23.950947  719822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:13:23.966548  719822 system_svc.go:56] duration metric: took 15.654898ms WaitForService to wait for kubelet
	I0819 19:13:23.966577  719822 kubeadm.go:582] duration metric: took 24.701061592s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:13:23.966598  719822 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:13:23.970793  719822 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 19:13:23.970831  719822 node_conditions.go:123] node cpu capacity is 2
	I0819 19:13:23.970850  719822 node_conditions.go:105] duration metric: took 4.244638ms to run NodePressure ...
	I0819 19:13:23.970863  719822 start.go:241] waiting for startup goroutines ...
	I0819 19:13:24.133071  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:24.133216  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:24.300849  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:24.633931  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:24.634417  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:24.800335  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:25.132208  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:25.134674  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:25.302187  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:25.632952  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:25.635726  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:25.801902  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:26.133407  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:26.134070  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:26.300011  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:26.633249  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:26.634023  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:26.801825  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:27.133974  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:27.135408  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:27.301139  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:27.631450  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:27.635146  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:27.801537  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:28.136607  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:28.138865  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:28.305565  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:28.657639  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:28.659390  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:28.824259  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:29.131635  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:29.134163  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:29.301051  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:29.634614  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:29.635918  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:29.801632  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:30.132488  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:30.135838  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:30.301117  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:30.634171  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:30.634535  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:30.800286  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:31.134184  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:31.134688  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:31.304387  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:31.637149  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:31.638199  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:31.801308  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:32.135077  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:32.136668  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:32.300600  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:32.633829  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:32.634014  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:32.801869  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:33.133478  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:33.134602  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:33.301995  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:33.634543  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:33.635131  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:33.800456  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:34.137414  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:34.138828  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:34.300955  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:34.631549  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:34.634270  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:34.804873  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:35.136324  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:35.137505  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:35.304648  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:35.633881  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:35.637119  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:35.814439  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:36.137089  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:36.138481  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:36.310829  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:36.639002  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:36.640109  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:36.804402  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:37.134787  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:37.137117  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:37.302992  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:37.637761  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:37.639144  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 19:13:37.801541  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:38.134021  719822 kapi.go:107] duration metric: took 28.006036724s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 19:13:38.135286  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:38.300753  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:38.633027  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:38.800593  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:39.135700  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:39.301240  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:39.635599  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:39.801962  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:40.138451  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:40.302923  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:40.642967  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:40.801646  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:41.134076  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:41.304158  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:41.633853  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:41.801289  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:42.135509  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:42.301572  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:42.633435  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:42.800767  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:43.133646  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:43.301358  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:43.634386  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:43.800994  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:44.135286  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:44.300381  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:44.635465  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:44.800565  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:45.140447  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:45.302435  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:45.633705  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:45.806692  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:46.134147  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:46.300548  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:46.633844  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:46.801660  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:47.134300  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:47.301377  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:47.633814  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:47.801230  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:48.134107  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:48.301225  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:48.633716  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:48.801945  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:49.134000  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:49.301849  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:49.634065  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:49.806191  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:50.133823  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:50.301551  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:50.634073  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:50.800573  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:51.138390  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:51.300614  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:51.633550  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:51.801056  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:52.135049  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:52.301793  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:52.634752  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:52.800734  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:53.134982  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:53.300983  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:53.641687  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:53.837381  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:54.135696  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:54.300919  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:54.634086  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:54.801072  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:55.137436  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:55.301848  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:55.634422  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:55.802983  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:56.137772  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:56.301081  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:56.634017  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:56.801333  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:57.133810  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:57.300843  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:57.634127  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:57.800863  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:58.135605  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:58.302099  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:58.634277  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:58.800608  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:59.134487  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:59.301059  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:13:59.634849  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:13:59.801376  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 19:14:00.185664  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:00.385162  719822 kapi.go:107] duration metric: took 49.589590192s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 19:14:00.634207  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:01.134447  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:01.634067  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:02.136311  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:02.633911  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:03.133924  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:03.633547  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:04.133817  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:04.634161  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:05.134010  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:05.633491  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:06.133351  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:06.634109  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:07.134008  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:07.634146  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:08.133330  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:08.633047  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:09.133678  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:09.634000  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:10.133118  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:10.633990  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:11.134233  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:11.633292  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:12.133758  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:12.633675  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:13.133994  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:13.634739  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:14.141964  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:14.633416  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:15.134388  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:15.634283  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:16.133512  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:16.633988  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:17.133971  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:17.634553  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:18.133776  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:18.634333  719822 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 19:14:19.154216  719822 kapi.go:107] duration metric: took 1m9.025213434s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 19:14:35.141894  719822 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 19:14:35.141919  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 19:14:35.641399  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 19:14:36.142397  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 19:14:36.640714  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 19:14:37.142229  719822 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 19:14:37.642237  719822 kapi.go:107] duration metric: took 1m25.504566394s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 19:14:37.644485  719822 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-764717 cluster.
	I0819 19:14:37.647182  719822 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 19:14:37.649023  719822 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
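The `gcp-auth-skip-secret` hint above is a label set on the pod itself; a minimal sketch of an opted-out pod follows (the pod name, image, and file name are illustrative placeholders, not taken from this log; only the label key comes from the message above, and the "true" value is the conventional one):

    # skip-gcp-auth.yaml (hypothetical file name)
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds               # illustrative name
      labels:
        gcp-auth-skip-secret: "true"   # opts this pod out of credential mounting
    spec:
      containers:
      - name: app
        image: nginx                   # placeholder image

    # apply it to the same cluster:
    # kubectl --context addons-764717 apply -f skip-gcp-auth.yaml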
	I0819 19:14:37.651145  719822 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner-rancher, volcano, ingress-dns, storage-provisioner, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0819 19:14:37.653241  719822 addons.go:510] duration metric: took 1m38.387319214s for enable addons: enabled=[cloud-spanner storage-provisioner-rancher volcano ingress-dns storage-provisioner nvidia-device-plugin metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0819 19:14:37.653292  719822 start.go:246] waiting for cluster config update ...
	I0819 19:14:37.653318  719822 start.go:255] writing updated cluster config ...
	I0819 19:14:37.653669  719822 ssh_runner.go:195] Run: rm -f paused
	I0819 19:14:37.994870  719822 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:14:37.998106  719822 out.go:177] * Done! kubectl is now configured to use "addons-764717" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	54bbac5fa3417       e2d3313f65753       About a minute ago   Exited              gadget                                   5                   35005be3bf983       gadget-hhx62
	9c0ad0ca1685a       6ef582f3ec844       3 minutes ago        Running             gcp-auth                                 0                   2a9f02159e3be       gcp-auth-89d5ffd79-r2bmj
	a71254b93cf54       289a818c8d9c5       3 minutes ago        Running             controller                               0                   4100d8bc14ff7       ingress-nginx-controller-bc57996ff-mxrp6
	631e39b3e5aff       ee6d597e62dc8       3 minutes ago        Running             csi-snapshotter                          0                   807c13738fe15       csi-hostpathplugin-8k72f
	c4958236d119c       642ded511e141       3 minutes ago        Running             csi-provisioner                          0                   807c13738fe15       csi-hostpathplugin-8k72f
	afe7c03f92725       922312104da8a       4 minutes ago        Running             liveness-probe                           0                   807c13738fe15       csi-hostpathplugin-8k72f
	0232247ff48c3       08f6b2990811a       4 minutes ago        Running             hostpath                                 0                   807c13738fe15       csi-hostpathplugin-8k72f
	67455b51ad68f       0107d56dbc0be       4 minutes ago        Running             node-driver-registrar                    0                   807c13738fe15       csi-hostpathplugin-8k72f
	33af6215c37b5       8b46b1cd48760       4 minutes ago        Running             admission                                0                   949d96875181b       volcano-admission-77d7d48b68-kmj97
	0e79ebf33a62f       9a80d518f102c       4 minutes ago        Running             csi-attacher                             0                   bb0951d1aad48       csi-hostpath-attacher-0
	b17dcb5c9683f       d9c7ad4c226bf       4 minutes ago        Running             volcano-scheduler                        0                   572dcc126a33a       volcano-scheduler-576bc46687-bbv4d
	8650528c15bc4       420193b27261a       4 minutes ago        Exited              patch                                    0                   352b9de484df3       ingress-nginx-admission-patch-nds9b
	3b8dc64fb2c2e       1461903ec4fe9       4 minutes ago        Running             csi-external-health-monitor-controller   0                   807c13738fe15       csi-hostpathplugin-8k72f
	9a16362b2aa32       487fa743e1e22       4 minutes ago        Running             csi-resizer                              0                   ce7fd3f91ce79       csi-hostpath-resizer-0
	5f15faabb7439       420193b27261a       4 minutes ago        Exited              create                                   0                   e2b3b7313a55f       ingress-nginx-admission-create-sfzxn
	f155eeae6c5c0       4d1e5c3e97420       4 minutes ago        Running             volume-snapshot-controller               0                   82091d1af5158       snapshot-controller-56fcc65765-khfcv
	06bf465249116       1505f556b3a7b       4 minutes ago        Running             volcano-controllers                      0                   dafb5a1f11f57       volcano-controllers-56675bb4d5-2pcjc
	741b5f3b697ad       77bdba588b953       4 minutes ago        Running             yakd                                     0                   74c1e8d3e9cd5       yakd-dashboard-67d98fc6b-xmh9p
	985a4c502d328       6fed88f43b276       4 minutes ago        Running             registry                                 0                   e6db8f6d9bcbb       registry-6fb4cdfc84-mdlgd
	3a366e3785219       95dccb4df54ab       4 minutes ago        Running             metrics-server                           0                   fd711e5336261       metrics-server-8988944d9-z27w8
	bc6a3a1f03220       4d1e5c3e97420       4 minutes ago        Running             volume-snapshot-controller               0                   462291bbcaead       snapshot-controller-56fcc65765-vv4tn
	14c884fdff117       3410e1561990a       4 minutes ago        Running             registry-proxy                           0                   f940471dad142       registry-proxy-95ls7
	b9fbed6dc04f2       53af6e2c4c343       4 minutes ago        Running             cloud-spanner-emulator                   0                   2413be816a943       cloud-spanner-emulator-c4bc9b5f8-sp82d
	224496d02463e       7ce2150c8929b       4 minutes ago        Running             local-path-provisioner                   0                   122df1ba0d1c1       local-path-provisioner-86d989889c-j9b8s
	ebb1b741915f8       a9bac31a5be8d       4 minutes ago        Running             nvidia-device-plugin-ctr                 0                   2a56650334004       nvidia-device-plugin-daemonset-kvhw5
	5ea8442f13eb8       35508c2f890c4       4 minutes ago        Running             minikube-ingress-dns                     0                   41ba2ab653398       kube-ingress-dns-minikube
	4d7d7d7bf7f77       2437cf7621777       4 minutes ago        Running             coredns                                  0                   c3b4a11182a1e       coredns-6f6b679f8f-jjj55
	92e2ec280c809       ba04bb24b9575       4 minutes ago        Running             storage-provisioner                      0                   fdcd5ca8dd602       storage-provisioner
	19078c79e63a8       6a23fa8fd2b78       4 minutes ago        Running             kindnet-cni                              0                   99fbc71ddbe1e       kindnet-p7cbv
	7a521c239c768       71d55d66fd4ee       4 minutes ago        Running             kube-proxy                               0                   3b770d2f472d7       kube-proxy-ffzf6
	9859f5ff72b35       fbbbd428abb4d       5 minutes ago        Running             kube-scheduler                           0                   78d4024729175       kube-scheduler-addons-764717
	9bc55c3ab6b21       27e3830e14027       5 minutes ago        Running             etcd                                     0                   de91c09959f80       etcd-addons-764717
	792c24c65076c       fcb0683e6bdbd       5 minutes ago        Running             kube-controller-manager                  0                   62536d8b7ca8c       kube-controller-manager-addons-764717
	df82c63c0408e       cd0f0ae0ec9e0       5 minutes ago        Running             kube-apiserver                           0                   a621567218fee       kube-apiserver-addons-764717
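A listing like the one above can also be pulled directly from the node; a hedged sketch, assuming the addons-764717 profile is still up and that crictl is available inside the node (as it normally is for containerd-based minikube nodes):

    # list all containers (running and exited) on the minikube node:
    minikube -p addons-764717 ssh -- sudo crictl ps -a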
	
	
	==> containerd <==
	Aug 19 19:15:10 addons-764717 containerd[822]: time="2024-08-19T19:15:10.282398737Z" level=info msg="RemoveContainer for \"3ee5fcf6267d79e9e637e6994aee40cf372ef4d3b21cfb038c32bf20b58276d5\" returns successfully"
	Aug 19 19:15:54 addons-764717 containerd[822]: time="2024-08-19T19:15:54.469425549Z" level=info msg="RemoveContainer for \"a2048a0e35063c70ff47fbb4d0817d22b280ad1c18b399b7ad7bf93f0718186c\""
	Aug 19 19:15:54 addons-764717 containerd[822]: time="2024-08-19T19:15:54.476342724Z" level=info msg="RemoveContainer for \"a2048a0e35063c70ff47fbb4d0817d22b280ad1c18b399b7ad7bf93f0718186c\" returns successfully"
	Aug 19 19:15:54 addons-764717 containerd[822]: time="2024-08-19T19:15:54.478515432Z" level=info msg="StopPodSandbox for \"3bbd087458e2ab6227cb05587477f11c860a4defd7553468018ce018b4669d6d\""
	Aug 19 19:15:54 addons-764717 containerd[822]: time="2024-08-19T19:15:54.487577170Z" level=info msg="TearDown network for sandbox \"3bbd087458e2ab6227cb05587477f11c860a4defd7553468018ce018b4669d6d\" successfully"
	Aug 19 19:15:54 addons-764717 containerd[822]: time="2024-08-19T19:15:54.487752430Z" level=info msg="StopPodSandbox for \"3bbd087458e2ab6227cb05587477f11c860a4defd7553468018ce018b4669d6d\" returns successfully"
	Aug 19 19:15:54 addons-764717 containerd[822]: time="2024-08-19T19:15:54.489009412Z" level=info msg="RemovePodSandbox for \"3bbd087458e2ab6227cb05587477f11c860a4defd7553468018ce018b4669d6d\""
	Aug 19 19:15:54 addons-764717 containerd[822]: time="2024-08-19T19:15:54.489062072Z" level=info msg="Forcibly stopping sandbox \"3bbd087458e2ab6227cb05587477f11c860a4defd7553468018ce018b4669d6d\""
	Aug 19 19:15:54 addons-764717 containerd[822]: time="2024-08-19T19:15:54.497048637Z" level=info msg="TearDown network for sandbox \"3bbd087458e2ab6227cb05587477f11c860a4defd7553468018ce018b4669d6d\" successfully"
	Aug 19 19:15:54 addons-764717 containerd[822]: time="2024-08-19T19:15:54.504934968Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bbd087458e2ab6227cb05587477f11c860a4defd7553468018ce018b4669d6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 19 19:15:54 addons-764717 containerd[822]: time="2024-08-19T19:15:54.505072116Z" level=info msg="RemovePodSandbox \"3bbd087458e2ab6227cb05587477f11c860a4defd7553468018ce018b4669d6d\" returns successfully"
	Aug 19 19:16:30 addons-764717 containerd[822]: time="2024-08-19T19:16:30.400280273Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\""
	Aug 19 19:16:30 addons-764717 containerd[822]: time="2024-08-19T19:16:30.525019409Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 19 19:16:30 addons-764717 containerd[822]: time="2024-08-19T19:16:30.527239295Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: active requests=0, bytes read=89"
	Aug 19 19:16:30 addons-764717 containerd[822]: time="2024-08-19T19:16:30.530819835Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" with image id \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\", size \"69907666\" in 130.483046ms"
	Aug 19 19:16:30 addons-764717 containerd[822]: time="2024-08-19T19:16:30.531006204Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" returns image reference \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\""
	Aug 19 19:16:30 addons-764717 containerd[822]: time="2024-08-19T19:16:30.533193394Z" level=info msg="CreateContainer within sandbox \"35005be3bf983e20829c0d1fd864f9a9b405c0356f76e1ea270f417545a279c8\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Aug 19 19:16:30 addons-764717 containerd[822]: time="2024-08-19T19:16:30.552926426Z" level=info msg="CreateContainer within sandbox \"35005be3bf983e20829c0d1fd864f9a9b405c0356f76e1ea270f417545a279c8\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"54bbac5fa3417d1aa58135db4a912b6d3e7ec0f38c9bd50dcb887b9dc7086d85\""
	Aug 19 19:16:30 addons-764717 containerd[822]: time="2024-08-19T19:16:30.553762489Z" level=info msg="StartContainer for \"54bbac5fa3417d1aa58135db4a912b6d3e7ec0f38c9bd50dcb887b9dc7086d85\""
	Aug 19 19:16:30 addons-764717 containerd[822]: time="2024-08-19T19:16:30.608351599Z" level=info msg="StartContainer for \"54bbac5fa3417d1aa58135db4a912b6d3e7ec0f38c9bd50dcb887b9dc7086d85\" returns successfully"
	Aug 19 19:16:31 addons-764717 containerd[822]: time="2024-08-19T19:16:31.905295890Z" level=info msg="shim disconnected" id=54bbac5fa3417d1aa58135db4a912b6d3e7ec0f38c9bd50dcb887b9dc7086d85 namespace=k8s.io
	Aug 19 19:16:31 addons-764717 containerd[822]: time="2024-08-19T19:16:31.905359455Z" level=warning msg="cleaning up after shim disconnected" id=54bbac5fa3417d1aa58135db4a912b6d3e7ec0f38c9bd50dcb887b9dc7086d85 namespace=k8s.io
	Aug 19 19:16:31 addons-764717 containerd[822]: time="2024-08-19T19:16:31.905369580Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 19 19:16:32 addons-764717 containerd[822]: time="2024-08-19T19:16:32.529396920Z" level=info msg="RemoveContainer for \"c1daa53c5907bfccb3e5cbe45e2c5463cf7b6bc57788c27351f3edf4a702139b\""
	Aug 19 19:16:32 addons-764717 containerd[822]: time="2024-08-19T19:16:32.536517490Z" level=info msg="RemoveContainer for \"c1daa53c5907bfccb3e5cbe45e2c5463cf7b6bc57788c27351f3edf4a702139b\" returns successfully"
	
	
	==> coredns [4d7d7d7bf7f777a14f84d05997cd89b11923e6c54ec5b8f24bf684644f0d9cfb] <==
	[INFO] 10.244.0.6:34353 - 32254 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049837s
	[INFO] 10.244.0.6:53540 - 9585 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001268493s
	[INFO] 10.244.0.6:53540 - 13132 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001709269s
	[INFO] 10.244.0.6:43611 - 2271 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072492s
	[INFO] 10.244.0.6:43611 - 46800 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000040336s
	[INFO] 10.244.0.6:57157 - 62079 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104073s
	[INFO] 10.244.0.6:57157 - 16251 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000041206s
	[INFO] 10.244.0.6:43437 - 1227 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062137s
	[INFO] 10.244.0.6:43437 - 11721 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00003968s
	[INFO] 10.244.0.6:36769 - 51858 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079925s
	[INFO] 10.244.0.6:36769 - 44176 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035298s
	[INFO] 10.244.0.6:45130 - 349 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004525375s
	[INFO] 10.244.0.6:45130 - 4699 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004350943s
	[INFO] 10.244.0.6:60243 - 25351 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000092208s
	[INFO] 10.244.0.6:60243 - 53765 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000133037s
	[INFO] 10.244.0.24:41913 - 54643 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002047727s
	[INFO] 10.244.0.24:59562 - 29270 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001633979s
	[INFO] 10.244.0.24:36517 - 24961 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000175005s
	[INFO] 10.244.0.24:54438 - 49582 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000068168s
	[INFO] 10.244.0.24:55672 - 59947 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115929s
	[INFO] 10.244.0.24:58838 - 3410 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000078654s
	[INFO] 10.244.0.24:45525 - 19045 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00260266s
	[INFO] 10.244.0.24:48927 - 47988 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002191759s
	[INFO] 10.244.0.24:45490 - 57748 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001046874s
	[INFO] 10.244.0.24:45382 - 13901 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001361982s
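	The NXDOMAIN fan-out above is normal pod DNS resolution, not a failure signal: with the default ndots:5, a short name such as registry.kube-system is tried against every entry in the pod's resolv.conf search path before being sent upstream verbatim, and only the final bare query in each burst returns NOERROR with real records. A sketch of the resolv.conf that would produce exactly this sequence (the kube-dns ClusterIP 10.96.0.10 is the kubeadm default, assumed here rather than read from this report):
	
	# /etc/resolv.conf inside a pod (assumed kubeadm defaults on EC2)
	search <pod-namespace>.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	nameserver 10.96.0.10
	options ndots:5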
	
	
	==> describe nodes <==
	Name:               addons-764717
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-764717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=addons-764717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_12_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-764717
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-764717"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:12:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-764717
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:17:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:14:57 +0000   Mon, 19 Aug 2024 19:12:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:14:57 +0000   Mon, 19 Aug 2024 19:12:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:14:57 +0000   Mon, 19 Aug 2024 19:12:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:14:57 +0000   Mon, 19 Aug 2024 19:12:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-764717
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 dec7470a8d254b009288db96caa2080c
	  System UUID:                a62f7dfa-a4b7-4e8d-9913-2a9ceb562e58
	  Boot ID:                    6e682a37-9512-4f3a-882d-7e45a79a9483
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-c4bc9b5f8-sp82d      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  gadget                      gadget-hhx62                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  gcp-auth                    gcp-auth-89d5ffd79-r2bmj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-mxrp6    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m48s
	  kube-system                 coredns-6f6b679f8f-jjj55                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m57s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 csi-hostpathplugin-8k72f                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 etcd-addons-764717                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m2s
	  kube-system                 kindnet-p7cbv                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m58s
	  kube-system                 kube-apiserver-addons-764717                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-controller-manager-addons-764717       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-ffzf6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-addons-764717                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 metrics-server-8988944d9-z27w8              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m51s
	  kube-system                 nvidia-device-plugin-daemonset-kvhw5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 registry-6fb4cdfc84-mdlgd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-proxy-95ls7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 snapshot-controller-56fcc65765-khfcv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 snapshot-controller-56fcc65765-vv4tn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  local-path-storage          local-path-provisioner-86d989889c-j9b8s     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  volcano-system              volcano-admission-77d7d48b68-kmj97          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  volcano-system              volcano-controllers-56675bb4d5-2pcjc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  volcano-system              volcano-scheduler-576bc46687-bbv4d          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-xmh9p              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m56s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m10s (x8 over 5m10s)  kubelet          Node addons-764717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m10s (x7 over 5m10s)  kubelet          Node addons-764717 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m10s (x7 over 5m10s)  kubelet          Node addons-764717 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 5m2s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m2s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  5m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5m2s                   kubelet          Node addons-764717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m2s                   kubelet          Node addons-764717 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m2s                   kubelet          Node addons-764717 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m58s                  node-controller  Node addons-764717 event: Registered Node addons-764717 in Controller
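	This node summary also explains the Volcano failure at the top of the report: allocatable CPU is 2 cores, 1050m (52%) of which is already requested by system and addon pods, leaving roughly 950m of headroom. Any task in testdata/vcjob.yaml that requests a full CPU or more therefore stays Pending with "0/1 nodes are unavailable: 1 Insufficient cpu." A hypothetical Volcano Job of that shape (the actual vcjob.yaml is not included in this report):
	
	apiVersion: batch.volcano.sh/v1alpha1
	kind: Job
	metadata:
	  name: test-job
	  namespace: my-volcano
	spec:
	  schedulerName: volcano
	  queue: test
	  minAvailable: 1
	  tasks:
	    - replicas: 1
	      name: nginx
	      template:
	        spec:
	          restartPolicy: Never
	          containers:
	            - name: nginx
	              image: nginx:latest
	              resources:
	                requests:
	                  cpu: "1"   # assumption: anything above ~950m cannot fit on this 2-CPU node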
	
	
	==> dmesg <==
	[Aug19 18:13] systemd-journald[216]: Failed to send stream file descriptor to service manager: Connection refused
	[Aug19 18:44] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> etcd [9bc55c3ab6b21e5549e1ba1269adb8b921905a3c94c6d170d70e4eb031132e69] <==
	{"level":"info","ts":"2024-08-19T19:12:47.722579Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T19:12:47.722648Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T19:12:47.722658Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T19:12:47.730473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-08-19T19:12:47.733696Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-08-19T19:12:47.985632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-19T19:12:47.985838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T19:12:47.985935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-19T19:12:47.986042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T19:12:47.986117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-19T19:12:47.986209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-19T19:12:47.986277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-19T19:12:47.993802Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-764717 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T19:12:47.995820Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:12:47.996064Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:12:47.996446Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:12:47.997040Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T19:12:47.997140Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T19:12:47.997263Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:12:47.997455Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:12:47.997573Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:12:47.998070Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:12:47.999133Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T19:12:47.999935Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:12:48.006588Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [9c0ad0ca1685a01ce9ee71924b42e74e8cf57629057346b900a5b323759fa121] <==
	2024/08/19 19:14:36 GCP Auth Webhook started!
	2024/08/19 19:14:54 Ready to marshal response ...
	2024/08/19 19:14:54 Ready to write response ...
	2024/08/19 19:14:55 Ready to marshal response ...
	2024/08/19 19:14:55 Ready to write response ...
	
	
	==> kernel <==
	 19:17:57 up  3:00,  0 users,  load average: 0.35, 1.57, 2.65
	Linux addons-764717 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [19078c79e63a87d96a4a8501f3c93a1e8436ec9627a05c33a785226f994b44eb] <==
	E0819 19:16:44.436269       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 19:16:51.426047       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 19:16:51.426242       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 19:16:52.230533       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 19:16:52.230746       1 main.go:299] handling current node
	I0819 19:17:02.230655       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 19:17:02.230767       1 main.go:299] handling current node
	I0819 19:17:12.230560       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 19:17:12.230654       1 main.go:299] handling current node
	I0819 19:17:22.230864       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 19:17:22.230910       1 main.go:299] handling current node
	W0819 19:17:22.337026       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 19:17:22.337058       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0819 19:17:24.188795       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 19:17:24.188829       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 19:17:32.120932       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 19:17:32.121075       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 19:17:32.230252       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 19:17:32.230290       1 main.go:299] handling current node
	I0819 19:17:42.230644       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 19:17:42.230695       1 main.go:299] handling current node
	I0819 19:17:52.230223       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 19:17:52.230258       1 main.go:299] handling current node
	W0819 19:17:53.147622       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 19:17:53.147695       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
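	The repeated reflector warnings above are RBAC denials: the kindnet ServiceAccount lacks list/watch on exactly the resources each line names (namespaces and pods in the core group, networkpolicies in networking.k8s.io), while node handling itself keeps working. A minimal ClusterRole that would grant what the log asks for (a sketch with a hypothetical name, not the manifest this cluster actually uses):
	
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: kindnet-netpol   # hypothetical name
	rules:
	  - apiGroups: [""]
	    resources: ["namespaces", "pods"]
	    verbs: ["list", "watch"]
	  - apiGroups: ["networking.k8s.io"]
	    resources: ["networkpolicies"]
	    verbs: ["list", "watch"]
	
	It would additionally need a ClusterRoleBinding to system:serviceaccount:kube-system:kindnet to take effect.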
	
	
	==> kube-apiserver [df82c63c0408e8a5c696a5f1f7661018468d7ff271cf35a213f79106d24094b1] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0819 19:13:45.734692       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.249.178:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.249.178:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.249.178:443: connect: connection refused" logger="UnhandledError"
	I0819 19:13:45.841030       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0819 19:13:50.597879       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.94.11:443: connect: connection refused
	W0819 19:13:51.625345       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.94.11:443: connect: connection refused
	W0819 19:13:52.723054       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.94.11:443: connect: connection refused
	W0819 19:13:53.760912       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.94.11:443: connect: connection refused
	W0819 19:13:53.950570       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.95.114:443: connect: connection refused
	E0819 19:13:53.950619       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.95.114:443: connect: connection refused" logger="UnhandledError"
	W0819 19:13:53.952423       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.94.11:443: connect: connection refused
	W0819 19:13:54.815705       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.94.11:443: connect: connection refused
	W0819 19:13:55.841178       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.94.11:443: connect: connection refused
	W0819 19:13:56.924927       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.94.11:443: connect: connection refused
	W0819 19:13:57.989205       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.94.11:443: connect: connection refused
	W0819 19:13:59.035401       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.94.11:443: connect: connection refused
	W0819 19:14:14.935764       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.95.114:443: connect: connection refused
	E0819 19:14:14.935801       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.95.114:443: connect: connection refused" logger="UnhandledError"
	W0819 19:14:14.983993       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.95.114:443: connect: connection refused
	E0819 19:14:14.984031       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.95.114:443: connect: connection refused" logger="UnhandledError"
	W0819 19:14:34.917680       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.95.114:443: connect: connection refused
	E0819 19:14:34.917720       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.95.114:443: connect: connection refused" logger="UnhandledError"
	I0819 19:14:54.567107       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0819 19:14:54.613326       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
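	Note the two failure modes in the webhook errors above: the volcano webhooks log "failing closed" (requests are rejected while volcano-admission-service is unreachable), whereas gcp-auth-mutate.k8s.io logs "failing open" (requests are admitted without mutation). That split is controlled per webhook by failurePolicy; a minimal sketch of the distinction, not the actual manifests deployed here:
	
	apiVersion: admissionregistration.k8s.io/v1
	kind: MutatingWebhookConfiguration
	metadata:
	  name: example-webhooks   # hypothetical
	webhooks:
	  - name: mutatequeue.volcano.sh
	    failurePolicy: Fail      # "failing closed": deny requests while the backend is down
	    clientConfig:
	      service:
	        name: volcano-admission-service
	        namespace: volcano-system
	        path: /queues/mutate
	    admissionReviewVersions: ["v1"]
	    sideEffects: None
	  - name: gcp-auth-mutate.k8s.io
	    failurePolicy: Ignore    # "failing open": admit unmodified while the backend is down
	    clientConfig:
	      service:
	        name: gcp-auth
	        namespace: gcp-auth
	        path: /mutate
	    admissionReviewVersions: ["v1"]
	    sideEffects: None
	
	The errors taper off once the backing Services come up; by 19:14:54 the apiserver is admitting Volcano resources, as the quota-evaluator lines above show.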
	
	
	==> kube-controller-manager [792c24c65076cf5fb69f909f0075b39e179d0225fd83ba6baff250fbe20be90b] <==
	I0819 19:14:18.385304       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 19:14:19.077715       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 19:14:19.092946       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 19:14:19.105461       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 19:14:19.107673       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 19:14:19.129804       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="70.326µs"
	I0819 19:14:19.243000       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 19:14:20.116113       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 19:14:20.124473       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 19:14:20.132482       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 19:14:26.648090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-764717"
	I0819 19:14:32.373712       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="18.474609ms"
	I0819 19:14:32.374030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="86.957µs"
	I0819 19:14:34.950767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="35.645009ms"
	I0819 19:14:34.959919       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="8.924762ms"
	I0819 19:14:34.960234       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="77.702µs"
	I0819 19:14:34.970794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="50.388µs"
	I0819 19:14:37.198651       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="17.580119ms"
	I0819 19:14:37.199948       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="40.73µs"
	I0819 19:14:49.024944       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0819 19:14:49.061100       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0819 19:14:50.018771       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0819 19:14:50.063898       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0819 19:14:54.248244       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0819 19:14:57.233992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-764717"
	
	
	==> kube-proxy [7a521c239c7683077504245bc1823bddd78ea79bed710602c4939e7668695234] <==
	I0819 19:13:00.398065       1 server_linux.go:66] "Using iptables proxy"
	I0819 19:13:00.539870       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 19:13:00.539937       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:13:00.631843       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 19:13:00.631902       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:13:00.634903       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:13:00.635351       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:13:00.635377       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:13:00.642855       1 config.go:197] "Starting service config controller"
	I0819 19:13:00.642879       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:13:00.642898       1 config.go:326] "Starting node config controller"
	I0819 19:13:00.642910       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:13:00.642914       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:13:00.642919       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:13:00.743316       1 shared_informer.go:320] Caches are synced for node config
	I0819 19:13:00.743577       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 19:13:00.743625       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [9859f5ff72b3571c6beea26b72108f1a372219374a492e384f22accad01f1356] <==
	E0819 19:12:51.912718       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:12:51.912915       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 19:12:51.913010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 19:12:51.911184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 19:12:51.913177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:12:51.911222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 19:12:51.913265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:12:51.913395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 19:12:51.913416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:12:51.913528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 19:12:51.913572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:12:51.913715       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 19:12:51.913756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0819 19:12:51.913057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:12:51.913877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 19:12:51.913899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:12:51.913986       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 19:12:51.914116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 19:12:51.915192       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 19:12:51.915221       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 19:12:52.748153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 19:12:52.748395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:12:52.805887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 19:12:52.806005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0819 19:12:53.303903       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 19:16:19 addons-764717 kubelet[1488]: I0819 19:16:19.398997    1488 scope.go:117] "RemoveContainer" containerID="c1daa53c5907bfccb3e5cbe45e2c5463cf7b6bc57788c27351f3edf4a702139b"
	Aug 19 19:16:19 addons-764717 kubelet[1488]: E0819 19:16:19.399204    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-hhx62_gadget(7c1eea9a-ce28-4d01-9430-a2a295695f96)\"" pod="gadget/gadget-hhx62" podUID="7c1eea9a-ce28-4d01-9430-a2a295695f96"
	Aug 19 19:16:30 addons-764717 kubelet[1488]: I0819 19:16:30.399194    1488 scope.go:117] "RemoveContainer" containerID="c1daa53c5907bfccb3e5cbe45e2c5463cf7b6bc57788c27351f3edf4a702139b"
	Aug 19 19:16:32 addons-764717 kubelet[1488]: I0819 19:16:32.527091    1488 scope.go:117] "RemoveContainer" containerID="c1daa53c5907bfccb3e5cbe45e2c5463cf7b6bc57788c27351f3edf4a702139b"
	Aug 19 19:16:32 addons-764717 kubelet[1488]: I0819 19:16:32.527834    1488 scope.go:117] "RemoveContainer" containerID="54bbac5fa3417d1aa58135db4a912b6d3e7ec0f38c9bd50dcb887b9dc7086d85"
	Aug 19 19:16:32 addons-764717 kubelet[1488]: E0819 19:16:32.528132    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hhx62_gadget(7c1eea9a-ce28-4d01-9430-a2a295695f96)\"" pod="gadget/gadget-hhx62" podUID="7c1eea9a-ce28-4d01-9430-a2a295695f96"
	Aug 19 19:16:33 addons-764717 kubelet[1488]: I0819 19:16:33.531615    1488 scope.go:117] "RemoveContainer" containerID="54bbac5fa3417d1aa58135db4a912b6d3e7ec0f38c9bd50dcb887b9dc7086d85"
	Aug 19 19:16:33 addons-764717 kubelet[1488]: E0819 19:16:33.531787    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hhx62_gadget(7c1eea9a-ce28-4d01-9430-a2a295695f96)\"" pod="gadget/gadget-hhx62" podUID="7c1eea9a-ce28-4d01-9430-a2a295695f96"
	Aug 19 19:16:34 addons-764717 kubelet[1488]: I0819 19:16:34.534563    1488 scope.go:117] "RemoveContainer" containerID="54bbac5fa3417d1aa58135db4a912b6d3e7ec0f38c9bd50dcb887b9dc7086d85"
	Aug 19 19:16:34 addons-764717 kubelet[1488]: E0819 19:16:34.534764    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hhx62_gadget(7c1eea9a-ce28-4d01-9430-a2a295695f96)\"" pod="gadget/gadget-hhx62" podUID="7c1eea9a-ce28-4d01-9430-a2a295695f96"
	Aug 19 19:16:46 addons-764717 kubelet[1488]: I0819 19:16:46.399103    1488 scope.go:117] "RemoveContainer" containerID="54bbac5fa3417d1aa58135db4a912b6d3e7ec0f38c9bd50dcb887b9dc7086d85"
	Aug 19 19:16:46 addons-764717 kubelet[1488]: E0819 19:16:46.399309    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hhx62_gadget(7c1eea9a-ce28-4d01-9430-a2a295695f96)\"" pod="gadget/gadget-hhx62" podUID="7c1eea9a-ce28-4d01-9430-a2a295695f96"
	Aug 19 19:17:00 addons-764717 kubelet[1488]: I0819 19:17:00.400696    1488 scope.go:117] "RemoveContainer" containerID="54bbac5fa3417d1aa58135db4a912b6d3e7ec0f38c9bd50dcb887b9dc7086d85"
	Aug 19 19:17:00 addons-764717 kubelet[1488]: E0819 19:17:00.400944    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hhx62_gadget(7c1eea9a-ce28-4d01-9430-a2a295695f96)\"" pod="gadget/gadget-hhx62" podUID="7c1eea9a-ce28-4d01-9430-a2a295695f96"
	Aug 19 19:17:12 addons-764717 kubelet[1488]: I0819 19:17:12.398538    1488 scope.go:117] "RemoveContainer" containerID="54bbac5fa3417d1aa58135db4a912b6d3e7ec0f38c9bd50dcb887b9dc7086d85"
	Aug 19 19:17:12 addons-764717 kubelet[1488]: E0819 19:17:12.399228    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hhx62_gadget(7c1eea9a-ce28-4d01-9430-a2a295695f96)\"" pod="gadget/gadget-hhx62" podUID="7c1eea9a-ce28-4d01-9430-a2a295695f96"
	Aug 19 19:17:17 addons-764717 kubelet[1488]: I0819 19:17:17.398978    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-mdlgd" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 19:17:23 addons-764717 kubelet[1488]: I0819 19:17:23.398627    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-kvhw5" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 19:17:23 addons-764717 kubelet[1488]: I0819 19:17:23.399532    1488 scope.go:117] "RemoveContainer" containerID="54bbac5fa3417d1aa58135db4a912b6d3e7ec0f38c9bd50dcb887b9dc7086d85"
	Aug 19 19:17:23 addons-764717 kubelet[1488]: E0819 19:17:23.399762    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hhx62_gadget(7c1eea9a-ce28-4d01-9430-a2a295695f96)\"" pod="gadget/gadget-hhx62" podUID="7c1eea9a-ce28-4d01-9430-a2a295695f96"
	Aug 19 19:17:25 addons-764717 kubelet[1488]: I0819 19:17:25.398721    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-95ls7" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 19:17:37 addons-764717 kubelet[1488]: I0819 19:17:37.398655    1488 scope.go:117] "RemoveContainer" containerID="54bbac5fa3417d1aa58135db4a912b6d3e7ec0f38c9bd50dcb887b9dc7086d85"
	Aug 19 19:17:37 addons-764717 kubelet[1488]: E0819 19:17:37.398971    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hhx62_gadget(7c1eea9a-ce28-4d01-9430-a2a295695f96)\"" pod="gadget/gadget-hhx62" podUID="7c1eea9a-ce28-4d01-9430-a2a295695f96"
	Aug 19 19:17:48 addons-764717 kubelet[1488]: I0819 19:17:48.398959    1488 scope.go:117] "RemoveContainer" containerID="54bbac5fa3417d1aa58135db4a912b6d3e7ec0f38c9bd50dcb887b9dc7086d85"
	Aug 19 19:17:48 addons-764717 kubelet[1488]: E0819 19:17:48.399666    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hhx62_gadget(7c1eea9a-ce28-4d01-9430-a2a295695f96)\"" pod="gadget/gadget-hhx62" podUID="7c1eea9a-ce28-4d01-9430-a2a295695f96"
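	The alternating RemoveContainer / CrashLoopBackOff pairs above are kubelet's standard restart back-off: the delay doubles per crash (the log shows 1m20s, then 2m40s) up to the 5m cap, and each cycle matches the containerd "shim disconnected" lines earlier, where the gadget container exits about a second after StartContainer. This crash loop is in the gadget addon and is unrelated to the Volcano scheduling failure; the usual next step when reading such a report would be the crashing container's own output, e.g.:
	
	kubectl --context addons-764717 -n gadget logs gadget-hhx62 --previous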
	
	
	==> storage-provisioner [92e2ec280c809c1dc567a5125c5f478c5f8c12329f8cb1da0a817608b0522a93] <==
	I0819 19:13:04.818644       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 19:13:04.832573       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 19:13:04.832622       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 19:13:04.844878       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 19:13:04.845051       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-764717_8513f15c-2007-4084-a505-28ff7a064b16!
	I0819 19:13:04.846105       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"80ad29e5-123b-41ca-9cd2-4a9a62b5031c", APIVersion:"v1", ResourceVersion:"548", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-764717_8513f15c-2007-4084-a505-28ff7a064b16 became leader
	I0819 19:13:04.945578       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-764717_8513f15c-2007-4084-a505-28ff7a064b16!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-764717 -n addons-764717
helpers_test.go:261: (dbg) Run:  kubectl --context addons-764717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-sfzxn ingress-nginx-admission-patch-nds9b test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-764717 describe pod ingress-nginx-admission-create-sfzxn ingress-nginx-admission-patch-nds9b test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-764717 describe pod ingress-nginx-admission-create-sfzxn ingress-nginx-admission-patch-nds9b test-job-nginx-0: exit status 1 (91.049266ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sfzxn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-nds9b" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-764717 describe pod ingress-nginx-admission-create-sfzxn ingress-nginx-admission-patch-nds9b test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (200.25s)


Test pass (299/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.46
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 5.85
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.22
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 150.9
31 TestAddons/serial/GCPAuth/Namespaces 0.18
33 TestAddons/parallel/Registry 16.68
34 TestAddons/parallel/Ingress 20.22
35 TestAddons/parallel/InspektorGadget 10.89
36 TestAddons/parallel/MetricsServer 6.86
39 TestAddons/parallel/CSI 50.02
40 TestAddons/parallel/Headlamp 17.96
41 TestAddons/parallel/CloudSpanner 5.74
42 TestAddons/parallel/LocalPath 53.4
43 TestAddons/parallel/NvidiaDevicePlugin 5.9
44 TestAddons/parallel/Yakd 11.85
45 TestAddons/StoppedEnableDisable 12.32
46 TestCertOptions 43.37
47 TestCertExpiration 232.91
49 TestForceSystemdFlag 44.05
50 TestForceSystemdEnv 40.42
51 TestDockerEnvContainerd 47.95
56 TestErrorSpam/setup 29.96
57 TestErrorSpam/start 0.82
58 TestErrorSpam/status 1.32
59 TestErrorSpam/pause 1.98
60 TestErrorSpam/unpause 1.92
61 TestErrorSpam/stop 1.53
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 50.71
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.59
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.36
73 TestFunctional/serial/CacheCmd/cache/add_local 1.34
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.34
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.15
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 44.31
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.75
84 TestFunctional/serial/LogsFileCmd 1.81
85 TestFunctional/serial/InvalidService 4.93
87 TestFunctional/parallel/ConfigCmd 0.54
88 TestFunctional/parallel/DashboardCmd 7.36
89 TestFunctional/parallel/DryRun 0.53
90 TestFunctional/parallel/InternationalLanguage 0.25
91 TestFunctional/parallel/StatusCmd 1.02
95 TestFunctional/parallel/ServiceCmdConnect 6.67
96 TestFunctional/parallel/AddonsCmd 0.18
97 TestFunctional/parallel/PersistentVolumeClaim 23.81
99 TestFunctional/parallel/SSHCmd 0.53
100 TestFunctional/parallel/CpCmd 2.03
102 TestFunctional/parallel/FileSync 0.37
103 TestFunctional/parallel/CertSync 2.17
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.76
111 TestFunctional/parallel/License 0.29
112 TestFunctional/parallel/Version/short 0.1
113 TestFunctional/parallel/Version/components 1.45
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.32
119 TestFunctional/parallel/ImageCommands/Setup 0.71
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.31
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.57
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.91
125 TestFunctional/parallel/ServiceCmd/DeployApp 11.25
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.69
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.45
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.34
136 TestFunctional/parallel/ServiceCmd/List 0.37
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
139 TestFunctional/parallel/ServiceCmd/Format 0.38
140 TestFunctional/parallel/ServiceCmd/URL 0.38
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
148 TestFunctional/parallel/ProfileCmd/profile_list 0.39
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
150 TestFunctional/parallel/MountCmd/any-port 8.96
151 TestFunctional/parallel/MountCmd/specific-port 1.15
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.73
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 118.02
160 TestMultiControlPlane/serial/DeployApp 30.79
161 TestMultiControlPlane/serial/PingHostFromPods 1.62
162 TestMultiControlPlane/serial/AddWorkerNode 23.06
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.76
165 TestMultiControlPlane/serial/CopyFile 20.15
166 TestMultiControlPlane/serial/StopSecondaryNode 12.92
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.33
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.17
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 131.88
171 TestMultiControlPlane/serial/DeleteSecondaryNode 9.97
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
173 TestMultiControlPlane/serial/StopCluster 36.14
174 TestMultiControlPlane/serial/RestartCluster 64.91
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
176 TestMultiControlPlane/serial/AddSecondaryNode 41.2
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.78
181 TestJSONOutput/start/Command 50.69
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.76
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.67
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.8
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
206 TestKicCustomNetwork/create_custom_network 40.96
207 TestKicCustomNetwork/use_default_bridge_network 33.35
208 TestKicExistingNetwork 36.95
209 TestKicCustomSubnet 34.12
210 TestKicStaticIP 34.72
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 71.33
215 TestMountStart/serial/StartWithMountFirst 6.29
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 8.95
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.6
220 TestMountStart/serial/VerifyMountPostDelete 0.44
221 TestMountStart/serial/Stop 1.26
222 TestMountStart/serial/RestartStopped 8.25
223 TestMountStart/serial/VerifyMountPostStop 0.28
226 TestMultiNode/serial/FreshStart2Nodes 69.95
227 TestMultiNode/serial/DeployApp2Nodes 17.95
228 TestMultiNode/serial/PingHostFrom2Pods 1.1
229 TestMultiNode/serial/AddNode 16.01
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.35
232 TestMultiNode/serial/CopyFile 10.29
233 TestMultiNode/serial/StopNode 2.23
234 TestMultiNode/serial/StartAfterStop 9.6
235 TestMultiNode/serial/RestartKeepsNodes 97.82
236 TestMultiNode/serial/DeleteNode 5.62
237 TestMultiNode/serial/StopMultiNode 23.96
238 TestMultiNode/serial/RestartMultiNode 47.88
239 TestMultiNode/serial/ValidateNameConflict 33.14
244 TestPreload 111.57
246 TestScheduledStopUnix 105.74
249 TestInsufficientStorage 10.85
250 TestRunningBinaryUpgrade 84.53
252 TestKubernetesUpgrade 349.32
253 TestMissingContainerUpgrade 172.37
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 39.09
257 TestNoKubernetes/serial/StartWithStopK8s 19.03
258 TestNoKubernetes/serial/Start 6.3
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
260 TestNoKubernetes/serial/ProfileList 1.2
261 TestNoKubernetes/serial/Stop 1.27
262 TestNoKubernetes/serial/StartNoArgs 7.01
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
271 TestNetworkPlugins/group/false 4.76
275 TestStoppedBinaryUpgrade/Setup 1.34
276 TestStoppedBinaryUpgrade/Upgrade 100.2
277 TestStoppedBinaryUpgrade/MinikubeLogs 1.08
286 TestPause/serial/Start 62.22
287 TestPause/serial/SecondStartNoReconfiguration 6.47
288 TestPause/serial/Pause 0.91
289 TestPause/serial/VerifyStatus 0.32
290 TestPause/serial/Unpause 0.67
291 TestPause/serial/PauseAgain 0.82
292 TestPause/serial/DeletePaused 2.63
293 TestPause/serial/VerifyDeletedResources 0.34
294 TestNetworkPlugins/group/auto/Start 50.58
295 TestNetworkPlugins/group/auto/KubeletFlags 0.39
296 TestNetworkPlugins/group/auto/NetCatPod 10.44
297 TestNetworkPlugins/group/auto/DNS 0.27
298 TestNetworkPlugins/group/auto/Localhost 0.26
299 TestNetworkPlugins/group/auto/HairPin 0.27
300 TestNetworkPlugins/group/kindnet/Start 58.89
301 TestNetworkPlugins/group/calico/Start 69.43
302 TestNetworkPlugins/group/kindnet/ControllerPod 6
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
304 TestNetworkPlugins/group/kindnet/NetCatPod 10.51
305 TestNetworkPlugins/group/kindnet/DNS 0.31
306 TestNetworkPlugins/group/kindnet/Localhost 0.17
307 TestNetworkPlugins/group/kindnet/HairPin 0.19
308 TestNetworkPlugins/group/calico/ControllerPod 6.01
309 TestNetworkPlugins/group/calico/KubeletFlags 0.4
310 TestNetworkPlugins/group/calico/NetCatPod 10.37
311 TestNetworkPlugins/group/custom-flannel/Start 60.5
312 TestNetworkPlugins/group/calico/DNS 0.36
313 TestNetworkPlugins/group/calico/Localhost 0.37
314 TestNetworkPlugins/group/calico/HairPin 0.49
315 TestNetworkPlugins/group/enable-default-cni/Start 73.99
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.42
318 TestNetworkPlugins/group/custom-flannel/DNS 0.28
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
321 TestNetworkPlugins/group/flannel/Start 51.59
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.66
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.33
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.32
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
327 TestNetworkPlugins/group/bridge/Start 77.51
328 TestNetworkPlugins/group/flannel/ControllerPod 6.02
329 TestNetworkPlugins/group/flannel/KubeletFlags 0.45
330 TestNetworkPlugins/group/flannel/NetCatPod 9.32
331 TestNetworkPlugins/group/flannel/DNS 0.25
332 TestNetworkPlugins/group/flannel/Localhost 0.21
333 TestNetworkPlugins/group/flannel/HairPin 0.18
335 TestStartStop/group/old-k8s-version/serial/FirstStart 135.09
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
337 TestNetworkPlugins/group/bridge/NetCatPod 13.5
338 TestNetworkPlugins/group/bridge/DNS 0.23
339 TestNetworkPlugins/group/bridge/Localhost 0.19
340 TestNetworkPlugins/group/bridge/HairPin 0.18
342 TestStartStop/group/no-preload/serial/FirstStart 72.04
343 TestStartStop/group/old-k8s-version/serial/DeployApp 9.57
344 TestStartStop/group/no-preload/serial/DeployApp 8.35
345 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.33
346 TestStartStop/group/old-k8s-version/serial/Stop 12.11
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
348 TestStartStop/group/no-preload/serial/Stop 12.09
349 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
350 TestStartStop/group/old-k8s-version/serial/SecondStart 308.95
351 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
352 TestStartStop/group/no-preload/serial/SecondStart 302.45
353 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
354 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
355 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
356 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.22
357 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
358 TestStartStop/group/no-preload/serial/Pause 3.95
359 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.32
360 TestStartStop/group/old-k8s-version/serial/Pause 4.44
362 TestStartStop/group/embed-certs/serial/FirstStart 70.16
364 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 74.53
365 TestStartStop/group/embed-certs/serial/DeployApp 8.33
366 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
367 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.33
368 TestStartStop/group/embed-certs/serial/Stop 12.19
369 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
370 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.09
371 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
372 TestStartStop/group/embed-certs/serial/SecondStart 268.53
373 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.3
374 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 272.58
375 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
377 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
378 TestStartStop/group/embed-certs/serial/Pause 3.28
379 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
380 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
382 TestStartStop/group/newest-cni/serial/FirstStart 41.22
383 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
384 TestStartStop/group/default-k8s-diff-port/serial/Pause 4
385 TestStartStop/group/newest-cni/serial/DeployApp 0
386 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.17
387 TestStartStop/group/newest-cni/serial/Stop 1.24
388 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
389 TestStartStop/group/newest-cni/serial/SecondStart 15.66
390 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
393 TestStartStop/group/newest-cni/serial/Pause 3.17
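
The TestJSONOutput rows above exercise minikube's machine-readable output, and the DistinctCurrentSteps/IncreasingCurrentSteps subtests validate the step counter in that stream. As a hedged illustration (assumes jq is installed; field names follow the CloudEvents-style step records these tests check), the step sequence can be inspected with:

  out/minikube-linux-arm64 start -p <profile> -o=json --driver=docker --container-runtime=containerd \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + ": " + .data.message'
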
TestDownloadOnly/v1.20.0/json-events (7.46s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-477663 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-477663 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.454871199s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.46s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
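
preload-exists only asserts that the tarball fetched by the json-events test is present in the local cache. A hedged manual equivalent (assumes the default MINIKUBE_HOME layout visible in the logs below):

  ls -lh ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4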

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-477663
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-477663: exit status 85 (67.103472ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-477663 | jenkins | v1.33.1 | 19 Aug 24 19:11 UTC |          |
	|         | -p download-only-477663        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:11:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:11:51.575632  719057 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:11:51.575842  719057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:11:51.575870  719057 out.go:358] Setting ErrFile to fd 2...
	I0819 19:11:51.575888  719057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:11:51.576149  719057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
	W0819 19:11:51.576322  719057 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19468-713648/.minikube/config/config.json: open /home/jenkins/minikube-integration/19468-713648/.minikube/config/config.json: no such file or directory
	I0819 19:11:51.576792  719057 out.go:352] Setting JSON to true
	I0819 19:11:51.577795  719057 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10453,"bootTime":1724084259,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 19:11:51.577894  719057 start.go:139] virtualization:  
	I0819 19:11:51.580091  719057 out.go:97] [download-only-477663] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0819 19:11:51.580303  719057 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19468-713648/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 19:11:51.580349  719057 notify.go:220] Checking for updates...
	I0819 19:11:51.582085  719057 out.go:169] MINIKUBE_LOCATION=19468
	I0819 19:11:51.583501  719057 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:11:51.585015  719057 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19468-713648/kubeconfig
	I0819 19:11:51.586963  719057 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-713648/.minikube
	I0819 19:11:51.588310  719057 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0819 19:11:51.590907  719057 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 19:11:51.591157  719057 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:11:51.612276  719057 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 19:11:51.612391  719057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 19:11:51.675304  719057 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 19:11:51.666068971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 19:11:51.675415  719057 docker.go:307] overlay module found
	I0819 19:11:51.676884  719057 out.go:97] Using the docker driver based on user configuration
	I0819 19:11:51.676907  719057 start.go:297] selected driver: docker
	I0819 19:11:51.676913  719057 start.go:901] validating driver "docker" against <nil>
	I0819 19:11:51.677041  719057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 19:11:51.729690  719057 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 19:11:51.719453653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 19:11:51.729950  719057 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 19:11:51.730299  719057 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 19:11:51.730520  719057 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 19:11:51.732080  719057 out.go:169] Using Docker driver with root privileges
	I0819 19:11:51.733429  719057 cni.go:84] Creating CNI manager for ""
	I0819 19:11:51.733460  719057 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 19:11:51.733471  719057 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 19:11:51.733572  719057 start.go:340] cluster config:
	{Name:download-only-477663 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-477663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:11:51.735395  719057 out.go:97] Starting "download-only-477663" primary control-plane node in "download-only-477663" cluster
	I0819 19:11:51.735429  719057 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 19:11:51.736624  719057 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0819 19:11:51.736677  719057 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 19:11:51.736752  719057 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 19:11:51.751563  719057 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 19:11:51.751757  719057 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 19:11:51.751859  719057 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 19:11:51.801456  719057 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0819 19:11:51.801497  719057 cache.go:56] Caching tarball of preloaded images
	I0819 19:11:51.801664  719057 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 19:11:51.803508  719057 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 19:11:51.803585  719057 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 19:11:51.894143  719057 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19468-713648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0819 19:11:56.485037  719057 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	
	
	* The control-plane node download-only-477663 host does not exist
	  To start a cluster, run: "minikube start -p download-only-477663"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
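
The preload download in the log above is checksum-pinned via the ?checksum=md5:... query parameter. A hedged way to re-verify the cached file by hand (assumes the default MINIKUBE_HOME; compare the digest against the one in the download URL):

  md5sum ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4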

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-477663
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0/json-events (5.85s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-172961 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-172961 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.845572555s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (5.85s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-172961
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-172961: exit status 85 (76.156862ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-477663 | jenkins | v1.33.1 | 19 Aug 24 19:11 UTC |                     |
	|         | -p download-only-477663        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 19:11 UTC | 19 Aug 24 19:11 UTC |
	| delete  | -p download-only-477663        | download-only-477663 | jenkins | v1.33.1 | 19 Aug 24 19:11 UTC | 19 Aug 24 19:11 UTC |
	| start   | -o=json --download-only        | download-only-172961 | jenkins | v1.33.1 | 19 Aug 24 19:11 UTC |                     |
	|         | -p download-only-172961        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:11:59
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:11:59.422221  719259 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:11:59.422413  719259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:11:59.422445  719259 out.go:358] Setting ErrFile to fd 2...
	I0819 19:11:59.422465  719259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:11:59.422788  719259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
	I0819 19:11:59.423269  719259 out.go:352] Setting JSON to true
	I0819 19:11:59.424217  719259 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10461,"bootTime":1724084259,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 19:11:59.424318  719259 start.go:139] virtualization:  
	I0819 19:11:59.426562  719259 out.go:97] [download-only-172961] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 19:11:59.426857  719259 notify.go:220] Checking for updates...
	I0819 19:11:59.428594  719259 out.go:169] MINIKUBE_LOCATION=19468
	I0819 19:11:59.429819  719259 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:11:59.431241  719259 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19468-713648/kubeconfig
	I0819 19:11:59.432791  719259 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-713648/.minikube
	I0819 19:11:59.434104  719259 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0819 19:11:59.436836  719259 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 19:11:59.437114  719259 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:11:59.458942  719259 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 19:11:59.459060  719259 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 19:11:59.525737  719259 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 19:11:59.515906859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 19:11:59.525853  719259 docker.go:307] overlay module found
	I0819 19:11:59.527469  719259 out.go:97] Using the docker driver based on user configuration
	I0819 19:11:59.527501  719259 start.go:297] selected driver: docker
	I0819 19:11:59.527507  719259 start.go:901] validating driver "docker" against <nil>
	I0819 19:11:59.527628  719259 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 19:11:59.587954  719259 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 19:11:59.579234095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 19:11:59.588115  719259 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 19:11:59.588433  719259 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 19:11:59.588586  719259 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 19:11:59.590259  719259 out.go:169] Using Docker driver with root privileges
	I0819 19:11:59.592432  719259 cni.go:84] Creating CNI manager for ""
	I0819 19:11:59.592457  719259 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 19:11:59.592470  719259 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 19:11:59.592555  719259 start.go:340] cluster config:
	{Name:download-only-172961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-172961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:11:59.594136  719259 out.go:97] Starting "download-only-172961" primary control-plane node in "download-only-172961" cluster
	I0819 19:11:59.594166  719259 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 19:11:59.595844  719259 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0819 19:11:59.595879  719259 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 19:11:59.596068  719259 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 19:11:59.610886  719259 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 19:11:59.611012  719259 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 19:11:59.611038  719259 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 19:11:59.611044  719259 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 19:11:59.611055  719259 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 19:11:59.669157  719259 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0819 19:11:59.669185  719259 cache.go:56] Caching tarball of preloaded images
	I0819 19:11:59.669756  719259 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 19:11:59.671318  719259 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 19:11:59.671349  719259 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 19:11:59.752308  719259 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/19468-713648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0819 19:12:03.628210  719259 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 19:12:03.628391  719259 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19468-713648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-172961 host does not exist
	  To start a cluster, run: "minikube start -p download-only-172961"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.22s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-172961
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-918461 --alsologtostderr --binary-mirror http://127.0.0.1:33127 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-918461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-918461
--- PASS: TestBinaryMirror (0.56s)
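
TestBinaryMirror points --binary-mirror at a local HTTP endpoint standing in for the upstream release storage. A minimal sketch of such a throwaway mirror (hypothetical directory; it would need to reproduce the upstream path layout for the requested version and architecture):

  python3 -m http.server 33127 --directory /path/to/mirror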

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-764717
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-764717: exit status 85 (65.376032ms)

-- stdout --
	* Profile "addons-764717" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-764717"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-764717
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-764717: exit status 85 (65.454775ms)

-- stdout --
	* Profile "addons-764717" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-764717"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (150.9s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-764717 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-764717 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m30.897536969s)
--- PASS: TestAddons/Setup (150.90s)
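
Setup enables every addon in a single start invocation; on a running profile the same addons can also be toggled one at a time with the addons subcommand used elsewhere in this report, e.g.:

  out/minikube-linux-arm64 -p addons-764717 addons enable metrics-server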

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-764717 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-764717 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)
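
The namespace check above confirms the gcp-auth secret is replicated into a freshly created namespace. A hedged one-off inspection of the replicated secret (standard kubectl jsonpath output, not part of the test):

  kubectl --context addons-764717 get secret gcp-auth -n new-namespace -o jsonpath='{.type}'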

TestAddons/parallel/Registry (16.68s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.711083ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-mdlgd" [e16db8aa-91f9-43cd-aa8f-55fb34b58974] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003203188s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-95ls7" [4f8e0022-33fe-4566-87c6-b25e8349fbbf] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003649372s
addons_test.go:342: (dbg) Run:  kubectl --context addons-764717 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-764717 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-764717 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.641949595s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-764717 ip
2024/08/19 19:18:35 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-764717 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.68s)
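
The wget probe above only spiders the service root; once the registry answers on 192.168.49.2:5000 (the DEBUG GET in the log), the standard Docker Registry HTTP API can be queried directly, e.g.:

  curl -s http://192.168.49.2:5000/v2/_catalog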

TestAddons/parallel/Ingress (20.22s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-764717 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-764717 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-764717 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [48377ff1-b525-4f90-a404-b1afb06814fd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [48377ff1-b525-4f90-a404-b1afb06814fd] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003199089s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-764717 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-764717 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-764717 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-764717 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-764717 addons disable ingress-dns --alsologtostderr -v=1: (1.594888491s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-764717 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-764717 addons disable ingress --alsologtostderr -v=1: (7.890873236s)
--- PASS: TestAddons/parallel/Ingress (20.22s)
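
The two checks above have direct manual equivalents, sketched here from the commands in the log (nginx.example.com and hello-john.test are the hostnames used by the testdata manifests):

    # Route by Host header through the ingress controller on the node:
    out/minikube-linux-arm64 -p addons-764717 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # Resolve an ingress-dns name directly against the node IP:
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-764717 ip)"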

TestAddons/parallel/InspektorGadget (10.89s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hhx62" [7c1eea9a-ce28-4d01-9430-a2a295695f96] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005253481s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-764717
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-764717: (5.882843477s)
--- PASS: TestAddons/parallel/InspektorGadget (10.89s)

TestAddons/parallel/MetricsServer (6.86s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.616552ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-z27w8" [02399b78-3ca4-4bba-bfcf-3a75829d8cd2] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004494875s
addons_test.go:417: (dbg) Run:  kubectl --context addons-764717 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-764717 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.86s)

TestAddons/parallel/CSI (50.02s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.045398ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-764717 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-764717 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1b44b5d9-24f2-44a5-a16b-8aada64733c9] Pending
helpers_test.go:344: "task-pv-pod" [1b44b5d9-24f2-44a5-a16b-8aada64733c9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1b44b5d9-24f2-44a5-a16b-8aada64733c9] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003881667s
addons_test.go:590: (dbg) Run:  kubectl --context addons-764717 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-764717 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-764717 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-764717 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-764717 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-764717 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-764717 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [118c947a-5a92-4360-a70f-08e9c98e3e4f] Pending
helpers_test.go:344: "task-pv-pod-restore" [118c947a-5a92-4360-a70f-08e9c98e3e4f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [118c947a-5a92-4360-a70f-08e9c98e3e4f] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005269212s
addons_test.go:632: (dbg) Run:  kubectl --context addons-764717 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-764717 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-764717 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-764717 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-764717 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.766175744s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-764717 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.02s)
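
The restore step above works by creating a new PVC whose dataSource points at the snapshot. A minimal sketch of such a manifest, not the actual contents of testdata/csi-hostpath-driver/pvc-restore.yaml: the hpvc-restore and new-snapshot-demo names come from the log, while the storage class and size are assumptions:

    # pvc-restore sketch; apply with: kubectl --context addons-764717 apply -f pvc-restore.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc   # assumed class name
      dataSource:
        apiGroup: snapshot.storage.k8s.io
        kind: VolumeSnapshot
        name: new-snapshot-demo
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi                    # assumed size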

TestAddons/parallel/Headlamp (17.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-764717 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-764717 --alsologtostderr -v=1: (1.128505779s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-f2dsc" [97f6b6cb-c11e-49c6-bf47-e18e1659c650] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-f2dsc" [97f6b6cb-c11e-49c6-bf47-e18e1659c650] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-f2dsc" [97f6b6cb-c11e-49c6-bf47-e18e1659c650] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003582733s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-764717 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-764717 addons disable headlamp --alsologtostderr -v=1: (5.827450621s)
--- PASS: TestAddons/parallel/Headlamp (17.96s)

TestAddons/parallel/CloudSpanner (5.74s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-sp82d" [20834b85-ea67-463f-af08-9c318f5abe43] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004673094s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-764717
--- PASS: TestAddons/parallel/CloudSpanner (5.74s)

TestAddons/parallel/LocalPath (53.4s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-764717 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-764717 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764717 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a3a47af3-b23d-4b02-abe1-d0d2d8c0d9a8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a3a47af3-b23d-4b02-abe1-d0d2d8c0d9a8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a3a47af3-b23d-4b02-abe1-d0d2d8c0d9a8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004129214s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-764717 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-764717 ssh "cat /opt/local-path-provisioner/pvc-d1db1f4a-613a-4c2d-b96c-2758377472b8_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-764717 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-764717 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-764717 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-764717 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.665951336s)
--- PASS: TestAddons/parallel/LocalPath (53.40s)
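
The ssh "cat" step above shows where local-path-provisioner materializes volumes on the node: under /opt/local-path-provisioner, in directories named <pv-name>_<namespace>_<pvc-name> (hence the ..._default_test-pvc path in the log). Since the pv name is generated, a sketch that lists instead of guessing:

    out/minikube-linux-arm64 -p addons-764717 ssh "ls /opt/local-path-provisioner/"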

TestAddons/parallel/NvidiaDevicePlugin (5.9s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kvhw5" [7ace71d8-efe9-4b5c-92a1-a84980af040a] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005936027s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-764717
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.90s)

TestAddons/parallel/Yakd (11.85s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-xmh9p" [339a9e97-6229-4f8f-85a7-ac3284fde369] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.008108813s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-764717 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-764717 addons disable yakd --alsologtostderr -v=1: (5.845316431s)
--- PASS: TestAddons/parallel/Yakd (11.85s)

TestAddons/StoppedEnableDisable (12.32s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-764717
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-764717: (12.036502216s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-764717
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-764717
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-764717
--- PASS: TestAddons/StoppedEnableDisable (12.32s)

TestCertOptions (43.37s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-991948 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-991948 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (40.140212454s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-991948 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-991948 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-991948 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-991948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-991948
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-991948: (2.355705255s)
--- PASS: TestCertOptions (43.37s)
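
The openssl step above is where the custom SANs and port are verified. A sketch that narrows the same output to the relevant section (the grep is an addition; expect 127.0.0.1, 192.168.15.15, localhost and www.google.com among the names):

    out/minikube-linux-arm64 -p cert-options-991948 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"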

TestCertExpiration (232.91s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-763837 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-763837 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (43.319129109s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-763837 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-763837 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.2267098s)
helpers_test.go:175: Cleaning up "cert-expiration-763837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-763837
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-763837: (2.356982342s)
--- PASS: TestCertExpiration (232.91s)
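
Condensed, the flow above is: start with deliberately short-lived certificates, let them expire, then start again with a longer --cert-expiration so minikube rotates them. A sketch using the same flags as the test:

    out/minikube-linux-arm64 start -p cert-expiration-763837 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=containerd
    sleep 180    # let the 3m certificates lapse (the test waits here too, hence the ~233s total)
    out/minikube-linux-arm64 start -p cert-expiration-763837 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=containerd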

TestForceSystemdFlag (44.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-452627 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-452627 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.899303395s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-452627 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-452627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-452627
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-452627: (2.55800269s)
--- PASS: TestForceSystemdFlag (44.05s)
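
The cat step above inspects the containerd config that --force-systemd generates. A narrower sketch (the grep is an addition; SystemdCgroup is containerd's runc cgroup-driver switch, expected to be true here):

    out/minikube-linux-arm64 -p force-systemd-flag-452627 ssh \
      "grep SystemdCgroup /etc/containerd/config.toml"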

TestForceSystemdEnv (40.42s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-658977 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-658977 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.156497275s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-658977 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-658977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-658977
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-658977: (1.979148083s)
--- PASS: TestForceSystemdEnv (40.42s)

TestDockerEnvContainerd (47.95s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-095548 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-095548 --driver=docker  --container-runtime=containerd: (32.213064811s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-095548"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-095548": (1.021590897s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-sksAvOfOMTli/agent.738332" SSH_AGENT_PID="738333" DOCKER_HOST=ssh://docker@127.0.0.1:33533 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-sksAvOfOMTli/agent.738332" SSH_AGENT_PID="738333" DOCKER_HOST=ssh://docker@127.0.0.1:33533 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-sksAvOfOMTli/agent.738332" SSH_AGENT_PID="738333" DOCKER_HOST=ssh://docker@127.0.0.1:33533 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.052109468s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-sksAvOfOMTli/agent.738332" SSH_AGENT_PID="738333" DOCKER_HOST=ssh://docker@127.0.0.1:33533 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-sksAvOfOMTli/agent.738332" SSH_AGENT_PID="738333" DOCKER_HOST=ssh://docker@127.0.0.1:33533 docker image ls": (1.060746092s)
helpers_test.go:175: Cleaning up "dockerenv-095548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-095548
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-095548: (1.972489859s)
--- PASS: TestDockerEnvContainerd (47.95s)
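
Outside the test harness, the same environment is normally loaded with eval rather than by exporting SSH_AUTH_SOCK/DOCKER_HOST by hand. A sketch with the flags from the log:

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-095548)"
    docker version    # now talks to the engine inside the minikube node over SSH
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env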

TestErrorSpam/setup (29.96s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-946908 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-946908 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-946908 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-946908 --driver=docker  --container-runtime=containerd: (29.956961897s)
--- PASS: TestErrorSpam/setup (29.96s)

TestErrorSpam/start (0.82s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

TestErrorSpam/status (1.32s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 status
--- PASS: TestErrorSpam/status (1.32s)

TestErrorSpam/pause (1.98s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 pause
--- PASS: TestErrorSpam/pause (1.98s)

TestErrorSpam/unpause (1.92s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 unpause
--- PASS: TestErrorSpam/unpause (1.92s)

TestErrorSpam/stop (1.53s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 stop: (1.3264453s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-946908 --log_dir /tmp/nospam-946908 stop
--- PASS: TestErrorSpam/stop (1.53s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19468-713648/.minikube/files/etc/test/nested/copy/719052/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.71s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-559559 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-559559 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (50.706060276s)
--- PASS: TestFunctional/serial/StartWithProxy (50.71s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.59s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-559559 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-559559 --alsologtostderr -v=8: (6.585048759s)
functional_test.go:663: soft start took 6.588200012s for "functional-559559" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.59s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-559559 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-559559 cache add registry.k8s.io/pause:3.1: (1.585420114s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-559559 cache add registry.k8s.io/pause:3.3: (1.523619145s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-559559 cache add registry.k8s.io/pause:latest: (1.245901547s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.36s)

TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-559559 /tmp/TestFunctionalserialCacheCmdcacheadd_local638716230/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 cache add minikube-local-cache-test:functional-559559
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 cache delete minikube-local-cache-test:functional-559559
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-559559
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-559559 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.086611ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-559559 cache reload: (1.427725976s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.34s)
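
Condensed, the round trip above: delete a cached image from inside the node, confirm it is gone (exit 1), then cache reload pushes it back from the host-side cache. All commands are taken from the log:

    out/minikube-linux-arm64 -p functional-559559 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-559559 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # exit 1: image gone
    out/minikube-linux-arm64 -p functional-559559 cache reload
    out/minikube-linux-arm64 -p functional-559559 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds again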

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 kubectl -- --context functional-559559 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-559559 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (44.31s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-559559 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-559559 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.30726582s)
functional_test.go:761: restart took 44.307368606s for "functional-559559" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.31s)
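
--extra-config passes a flag straight through to the named component; here it survives the restart and lands on the apiserver command line. A sketch of checking that (the label selector and jsonpath are assumptions, not part of the test):

    kubectl --context functional-559559 get pod -n kube-system -l component=kube-apiserver \
      -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins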

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-559559 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.75s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-559559 logs: (1.754039716s)
--- PASS: TestFunctional/serial/LogsCmd (1.75s)

TestFunctional/serial/LogsFileCmd (1.81s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 logs --file /tmp/TestFunctionalserialLogsFileCmd1394683487/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-559559 logs --file /tmp/TestFunctionalserialLogsFileCmd1394683487/001/logs.txt: (1.803515713s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.81s)

TestFunctional/serial/InvalidService (4.93s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-559559 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-559559
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-559559: exit status 115 (572.47036ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31987 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-559559 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-559559 delete -f testdata/invalidsvc.yaml: (1.078886865s)
--- PASS: TestFunctional/serial/InvalidService (4.93s)

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-559559 config get cpus: exit status 14 (116.719502ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-559559 config get cpus: exit status 14 (93.167796ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
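
The two exit-status-14 results above are expected: config get fails when the key is unset. The round trip, condensed from the log:

    out/minikube-linux-arm64 -p functional-559559 config set cpus 2
    out/minikube-linux-arm64 -p functional-559559 config get cpus      # prints 2
    out/minikube-linux-arm64 -p functional-559559 config unset cpus
    out/minikube-linux-arm64 -p functional-559559 config get cpus      # exit status 14: key not found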

TestFunctional/parallel/DashboardCmd (7.36s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-559559 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-559559 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 755528: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.36s)

TestFunctional/parallel/DryRun (0.53s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-559559 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-559559 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (211.078599ms)
-- stdout --
	* [functional-559559] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-713648/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-713648/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0819 19:24:35.968419  755065 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:24:35.968671  755065 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:24:35.968685  755065 out.go:358] Setting ErrFile to fd 2...
	I0819 19:24:35.968692  755065 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:24:35.968988  755065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
	I0819 19:24:35.969434  755065 out.go:352] Setting JSON to false
	I0819 19:24:35.970507  755065 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":11217,"bootTime":1724084259,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 19:24:35.970586  755065 start.go:139] virtualization:  
	I0819 19:24:35.973321  755065 out.go:177] * [functional-559559] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 19:24:35.974868  755065 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:24:35.974946  755065 notify.go:220] Checking for updates...
	I0819 19:24:35.978664  755065 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:24:35.980703  755065 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-713648/kubeconfig
	I0819 19:24:35.982310  755065 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-713648/.minikube
	I0819 19:24:35.983636  755065 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 19:24:35.985228  755065 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:24:35.987297  755065 config.go:182] Loaded profile config "functional-559559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 19:24:35.987970  755065 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:24:36.020901  755065 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 19:24:36.021050  755065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 19:24:36.110013  755065 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 19:24:36.099532195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 19:24:36.110137  755065 docker.go:307] overlay module found
	I0819 19:24:36.112388  755065 out.go:177] * Using the docker driver based on existing profile
	I0819 19:24:36.114542  755065 start.go:297] selected driver: docker
	I0819 19:24:36.114576  755065 start.go:901] validating driver "docker" against &{Name:functional-559559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-559559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:24:36.114879  755065 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:24:36.117481  755065 out.go:201] 
	W0819 19:24:36.119423  755065 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 19:24:36.121268  755065 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-559559 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.53s)
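Both dry-run invocations in this test exit with status 23 because the requested 250MB falls below minikube's usable minimum. A minimal sketch of that kind of pre-flight check, with the constant and message paraphrased here rather than copied from minikube's source:

// Sketch of a memory pre-flight validation like the one behind
// RSRC_INSUFFICIENT_REQ_MEMORY above; minikube's actual constant and
// wording differ, this only illustrates the comparison.
package main

import "fmt"

const minUsableMemoryMB = 1800 // assumed floor, mirroring the log message

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250)) // the --memory 250MB case exercised above
}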

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-559559 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-559559 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (251.406737ms)

-- stdout --
	* [functional-559559] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-713648/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-713648/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0819 19:24:36.524332  755193 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:24:36.524621  755193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:24:36.524634  755193 out.go:358] Setting ErrFile to fd 2...
	I0819 19:24:36.524640  755193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:24:36.530854  755193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
	I0819 19:24:36.531355  755193 out.go:352] Setting JSON to false
	I0819 19:24:36.532305  755193 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":11218,"bootTime":1724084259,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 19:24:36.532381  755193 start.go:139] virtualization:  
	I0819 19:24:36.534640  755193 out.go:177] * [functional-559559] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0819 19:24:36.537036  755193 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:24:36.537303  755193 notify.go:220] Checking for updates...
	I0819 19:24:36.540460  755193 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:24:36.541952  755193 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-713648/kubeconfig
	I0819 19:24:36.543325  755193 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-713648/.minikube
	I0819 19:24:36.544685  755193 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 19:24:36.546097  755193 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:24:36.548068  755193 config.go:182] Loaded profile config "functional-559559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 19:24:36.548676  755193 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:24:36.571083  755193 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 19:24:36.571318  755193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 19:24:36.683242  755193 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 19:24:36.666296265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 19:24:36.683394  755193 docker.go:307] overlay module found
	I0819 19:24:36.685647  755193 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0819 19:24:36.687495  755193 start.go:297] selected driver: docker
	I0819 19:24:36.687512  755193 start.go:901] validating driver "docker" against &{Name:functional-559559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-559559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:24:36.687646  755193 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:24:36.689865  755193 out.go:201] 
	W0819 19:24:36.691583  755193 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 19:24:36.693239  755193 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
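The -f flag exercised above is a Go text/template rendered against minikube's status struct. A minimal sketch of that mechanism, using a simplified stand-in struct rather than minikube's real type (the "kublet" spelling is kept verbatim from the test's format string):

package main

import (
	"os"
	"text/template"
)

// Status is a simplified stand-in for the struct minikube renders;
// the field names match the template keys used by the test above.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}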

TestFunctional/parallel/ServiceCmdConnect (6.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-559559 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-559559 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-cz96r" [eeb54c7d-6fd8-476f-9e54-6838fcaaf076] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-cz96r" [eeb54c7d-6fd8-476f-9e54-6838fcaaf076] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.003851866s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31909
functional_test.go:1675: http://192.168.49.2:31909: success! body:

Hostname: hello-node-connect-65d86f57f4-cz96r

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31909
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.67s)
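A minimal sketch of the connectivity check performed above: fetch the NodePort URL that `service hello-node-connect --url` printed and read the echo server's reply. The endpoint below is the one from this run and will differ per cluster:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// URL printed by `minikube service hello-node-connect --url` in this run.
	resp, err := http.Get("http://192.168.49.2:31909")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", body) // expect a "Hostname: hello-node-connect-..." line
}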

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (23.81s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [226168f5-4d01-4989-b1c6-c44f1d108e2f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004452931s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-559559 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-559559 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-559559 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-559559 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2d98700b-f03d-49fa-ad93-5878f381b56c] Pending
helpers_test.go:344: "sp-pod" [2d98700b-f03d-49fa-ad93-5878f381b56c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2d98700b-f03d-49fa-ad93-5878f381b56c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003565728s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-559559 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-559559 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-559559 delete -f testdata/storage-provisioner/pod.yaml: (1.598257043s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-559559 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5006bd8c-3f55-4bac-9af8-e77e5fab5ded] Pending
helpers_test.go:344: "sp-pod" [5006bd8c-3f55-4bac-9af8-e77e5fab5ded] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.005321932s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-559559 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.81s)
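The sequence above boils down to: create the claim, write a file through it, delete the pod, recreate it, and confirm the file survived. A condensed sketch driving the same kubectl steps from Go (error handling trimmed; the --context flag and the readiness waits the test performs between steps are omitted):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl and prints whatever comes back.
func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("kubectl %v: %s (err=%v)\n", args, out, err)
}

func main() {
	run("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("exec", "sp-pod", "--", "ls", "/tmp/mount") // "foo" should still be there
}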

TestFunctional/parallel/SSHCmd (0.53s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (2.03s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh -n functional-559559 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 cp functional-559559:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4287135908/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh -n functional-559559 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh -n functional-559559 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.03s)

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/719052/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "sudo cat /etc/test/nested/copy/719052/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.17s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/719052.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "sudo cat /etc/ssl/certs/719052.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/719052.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "sudo cat /usr/share/ca-certificates/719052.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/7190522.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "sudo cat /etc/ssl/certs/7190522.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/7190522.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "sudo cat /usr/share/ca-certificates/7190522.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.17s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-559559 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
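The --template above ranges over the node's .metadata.labels map and prints only the keys. The same go-template construct in a standalone program, with made-up label data:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in for a node's .metadata.labels map; the values are invented.
	labels := map[string]string{
		"kubernetes.io/arch": "arm64",
		"kubernetes.io/os":   "linux",
	}
	tmpl := template.Must(template.New("labels").Parse(
		"{{range $k, $v := .}}{{$k}} {{end}}\n"))
	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}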

TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-559559 ssh "sudo systemctl is-active docker": exit status 1 (409.863546ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-559559 ssh "sudo systemctl is-active crio": exit status 1 (348.805531ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)
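The test treats `systemctl is-active` as a probe: the unit state goes to stdout and a non-zero exit (status 3 for "inactive") is what marks the runtime as disabled on this containerd node. A sketch of reading both sides of that from Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// On this containerd node, docker and crio should both report inactive.
	out, err := exec.Command("systemctl", "is-active", "docker").CombinedOutput()
	fmt.Printf("state: %s", out) // "inactive\n" in the run above
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Printf("exit code: %d\n", exitErr.ExitCode()) // 3 => inactive
	}
}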

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.45s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 version -o=json --components
E0819 19:24:38.040387  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:24:38.047331  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:24:38.058780  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:24:38.080272  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:24:38.121664  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:24:38.203025  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-559559 version -o=json --components: (1.448618057s)
--- PASS: TestFunctional/parallel/Version/components (1.45s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-559559 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-559559
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-559559
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-559559 image ls --format short --alsologtostderr:
I0819 19:24:38.990808  755730 out.go:345] Setting OutFile to fd 1 ...
I0819 19:24:38.990979  755730 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:24:38.990986  755730 out.go:358] Setting ErrFile to fd 2...
I0819 19:24:38.990990  755730 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:24:38.991320  755730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
I0819 19:24:38.992592  755730 config.go:182] Loaded profile config "functional-559559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 19:24:38.992767  755730 config.go:182] Loaded profile config "functional-559559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 19:24:38.993449  755730 cli_runner.go:164] Run: docker container inspect functional-559559 --format={{.State.Status}}
I0819 19:24:39.017044  755730 ssh_runner.go:195] Run: systemctl --version
I0819 19:24:39.017109  755730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-559559
I0819 19:24:39.045066  755730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/functional-559559/id_rsa Username:docker}
I0819 19:24:39.150414  755730 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image ls --format table --alsologtostderr
E0819 19:24:43.172360  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-559559 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| docker.io/library/minikube-local-cache-test | functional-559559  | sha256:d424b5 | 989B   |
| docker.io/library/nginx                     | latest             | sha256:a9dfdb | 67.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-559559  | sha256:ce2d2c | 2.17MB |
| docker.io/library/nginx                     | alpine             | sha256:70594c | 19.6MB |
| localhost/my-image                          | functional-559559  | sha256:67ba03 | 831kB  |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-559559 image ls --format table --alsologtostderr:
I0819 19:24:43.162045  756100 out.go:345] Setting OutFile to fd 1 ...
I0819 19:24:43.162274  756100 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:24:43.162302  756100 out.go:358] Setting ErrFile to fd 2...
I0819 19:24:43.162322  756100 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:24:43.162620  756100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
I0819 19:24:43.163344  756100 config.go:182] Loaded profile config "functional-559559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 19:24:43.163562  756100 config.go:182] Loaded profile config "functional-559559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 19:24:43.164115  756100 cli_runner.go:164] Run: docker container inspect functional-559559 --format={{.State.Status}}
I0819 19:24:43.186648  756100 ssh_runner.go:195] Run: systemctl --version
I0819 19:24:43.186700  756100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-559559
I0819 19:24:43.211411  756100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/functional-559559/id_rsa Username:docker}
I0819 19:24:43.317255  756100 ssh_runner.go:195] Run: sudo crictl images --output json
2024/08/19 19:24:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-559559 image ls --format json --alsologtostderr:
[{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
"],"repoTags":["docker.io/library/nginx:latest"],"size":"67690150"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19627164"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"4532467
5"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-559559"],"size":"2173567"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23
aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:d424b5bade23fda40c7546d81ecd37feba448f37db0f4a2fa8d90faae16844b6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-559559"],"size":"989"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:67ba030f8cc7ce090efc054d0e6cedc2d6f1dd4323836894a868fe9d94d92264","repoDigests":[],"repoTags":["localhost/my-image:functional-559559"],"size":"830618"},{"id":"sha25
6:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-559559 image ls --format json --alsologtostderr:
I0819 19:24:42.842743  756030 out.go:345] Setting OutFile to fd 1 ...
I0819 19:24:42.843031  756030 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:24:42.843072  756030 out.go:358] Setting ErrFile to fd 2...
I0819 19:24:42.843095  756030 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:24:42.843375  756030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
I0819 19:24:42.844106  756030 config.go:182] Loaded profile config "functional-559559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 19:24:42.844303  756030 config.go:182] Loaded profile config "functional-559559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 19:24:42.844915  756030 cli_runner.go:164] Run: docker container inspect functional-559559 --format={{.State.Status}}
I0819 19:24:42.864864  756030 ssh_runner.go:195] Run: systemctl --version
I0819 19:24:42.864928  756030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-559559
I0819 19:24:42.888092  756030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/functional-559559/id_rsa Username:docker}
I0819 19:24:42.989359  756030 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)
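The listing above is a JSON array of objects with id, repoDigests, repoTags, and size keys. A minimal sketch of decoding it in Go, using an illustrative struct (not minikube's internal type) and one entry from the output as sample data:

package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the keys visible in the `image ls --format json` output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	data := []byte(`[{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]`)
	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%v  %s bytes\n", img.RepoTags, img.Size)
	}
}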

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image ls --format yaml --alsologtostderr
E0819 19:24:39.329661  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-559559 image ls --format yaml --alsologtostderr:
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "19627164"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-559559
size: "2173567"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:d424b5bade23fda40c7546d81ecd37feba448f37db0f4a2fa8d90faae16844b6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-559559
size: "989"
- id: sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
repoTags:
- docker.io/library/nginx:latest
size: "67690150"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-559559 image ls --format yaml --alsologtostderr:
I0819 19:24:39.254420  755760 out.go:345] Setting OutFile to fd 1 ...
I0819 19:24:39.254645  755760 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:24:39.254674  755760 out.go:358] Setting ErrFile to fd 2...
I0819 19:24:39.254694  755760 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:24:39.254956  755760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
I0819 19:24:39.255690  755760 config.go:182] Loaded profile config "functional-559559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 19:24:39.255872  755760 config.go:182] Loaded profile config "functional-559559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 19:24:39.256468  755760 cli_runner.go:164] Run: docker container inspect functional-559559 --format={{.State.Status}}
I0819 19:24:39.274965  755760 ssh_runner.go:195] Run: systemctl --version
I0819 19:24:39.275020  755760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-559559
I0819 19:24:39.293484  755760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/functional-559559/id_rsa Username:docker}
I0819 19:24:39.391108  755760 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-559559 ssh pgrep buildkitd: exit status 1 (428.133176ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image build -t localhost/my-image:functional-559559 testdata/build --alsologtostderr
E0819 19:24:40.610987  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-559559 image build -t localhost/my-image:functional-559559 testdata/build --alsologtostderr: (2.600140836s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-559559 image build -t localhost/my-image:functional-559559 testdata/build --alsologtostderr:
I0819 19:24:39.929234  755849 out.go:345] Setting OutFile to fd 1 ...
I0819 19:24:39.930321  755849 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:24:39.930365  755849 out.go:358] Setting ErrFile to fd 2...
I0819 19:24:39.930388  755849 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:24:39.930668  755849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
I0819 19:24:39.931392  755849 config.go:182] Loaded profile config "functional-559559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 19:24:39.932132  755849 config.go:182] Loaded profile config "functional-559559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 19:24:39.932690  755849 cli_runner.go:164] Run: docker container inspect functional-559559 --format={{.State.Status}}
I0819 19:24:39.951884  755849 ssh_runner.go:195] Run: systemctl --version
I0819 19:24:39.951943  755849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-559559
I0819 19:24:39.978513  755849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/functional-559559/id_rsa Username:docker}
I0819 19:24:40.118757  755849 build_images.go:161] Building image from path: /tmp/build.139950701.tar
I0819 19:24:40.118835  755849 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 19:24:40.134223  755849 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.139950701.tar
I0819 19:24:40.138712  755849 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.139950701.tar: stat -c "%s %y" /var/lib/minikube/build/build.139950701.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.139950701.tar': No such file or directory
I0819 19:24:40.138751  755849 ssh_runner.go:362] scp /tmp/build.139950701.tar --> /var/lib/minikube/build/build.139950701.tar (3072 bytes)
I0819 19:24:40.182115  755849 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.139950701
I0819 19:24:40.198152  755849 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.139950701 -xf /var/lib/minikube/build/build.139950701.tar
I0819 19:24:40.209508  755849 containerd.go:394] Building image: /var/lib/minikube/build/build.139950701
I0819 19:24:40.209702  755849 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.139950701 --local dockerfile=/var/lib/minikube/build/build.139950701 --output type=image,name=localhost/my-image:functional-559559
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:c9a2a5868730960419b1752fffd59e514b7e8fe47d76ffe8887c786c7e0b2a08 0.0s done
#8 exporting config sha256:67ba030f8cc7ce090efc054d0e6cedc2d6f1dd4323836894a868fe9d94d92264 done
#8 naming to localhost/my-image:functional-559559 done
#8 DONE 0.2s
I0819 19:24:42.434900  755849 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.139950701 --local dockerfile=/var/lib/minikube/build/build.139950701 --output type=image,name=localhost/my-image:functional-559559: (2.225131447s)
I0819 19:24:42.434968  755849 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.139950701
I0819 19:24:42.446271  755849 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.139950701.tar
I0819 19:24:42.458344  755849 build_images.go:217] Built localhost/my-image:functional-559559 from /tmp/build.139950701.tar
I0819 19:24:42.458423  755849 build_images.go:133] succeeded building to: functional-559559
I0819 19:24:42.458442  755849 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.32s)
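For orientation: the buildkit steps [1/3]-[3/3] logged above imply a three-line Dockerfile. A minimal sketch reconstructed from those step names (the actual contents of testdata/build may differ), runnable against the same profile:

  # Reconstructed from the logged steps; content.txt matches the 62B context transfer in step #4.
  printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
  echo test > content.txt
  # Same build invocation as the test, pointed at the current directory instead of testdata/build:
  out/minikube-linux-arm64 -p functional-559559 image build -t localhost/my-image:functional-559559 . --alsologtostderr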

TestFunctional/parallel/ImageCommands/Setup (0.71s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-559559
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 update-context --alsologtostderr -v=2
E0819 19:24:38.365096  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 update-context --alsologtostderr -v=2
E0819 19:24:38.687648  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image load --daemon kicbase/echo-server:functional-559559 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-559559 image load --daemon kicbase/echo-server:functional-559559 --alsologtostderr: (1.250801482s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.57s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image load --daemon kicbase/echo-server:functional-559559 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-559559 image load --daemon kicbase/echo-server:functional-559559 --alsologtostderr: (1.297156132s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.91s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-559559 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-559559 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-r4qsv" [d8580523-a834-4473-831c-4197d44636af] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-r4qsv" [d8580523-a834-4473-831c-4197d44636af] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003586526s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.25s)
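The deployment flow above is reproducible by hand with the same commands the test runs; a sketch (image and names verbatim from the log, the label selector follows from create deployment's default app label):

  kubectl --context functional-559559 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-559559 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-559559 get pods -l app=hello-node    # wait until STATUS is Running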

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-559559
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image load --daemon kicbase/echo-server:functional-559559 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-559559 image load --daemon kicbase/echo-server:functional-559559 --alsologtostderr: (1.13136116s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image save kicbase/echo-server:functional-559559 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image rm kicbase/echo-server:functional-559559 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-559559
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 image save --daemon kicbase/echo-server:functional-559559 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-559559
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
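Read together, the last four image tests form a save/load round trip. A condensed sketch of that sequence (tarball path shortened here for illustration; the commands themselves are verbatim from the log):

  out/minikube-linux-arm64 -p functional-559559 image save kicbase/echo-server:functional-559559 ./echo-server-save.tar
  out/minikube-linux-arm64 -p functional-559559 image rm kicbase/echo-server:functional-559559    # drop it in-cluster
  out/minikube-linux-arm64 -p functional-559559 image load ./echo-server-save.tar                 # restore from the tarball
  out/minikube-linux-arm64 -p functional-559559 image save --daemon kicbase/echo-server:functional-559559
  docker image inspect kicbase/echo-server:functional-559559                                      # now back in the host daemon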

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-559559 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-559559 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-559559 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 751994: os: process already finished
helpers_test.go:502: unable to terminate pid 751878: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-559559 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-559559 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-559559 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b4116306-81c7-4612-9e57-a129812f07d1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b4116306-81c7-4612-9e57-a129812f07d1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004732536s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

TestFunctional/parallel/ServiceCmd/List (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.37s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 service list -o json
functional_test.go:1494: Took "345.751562ms" to run "out/minikube-linux-arm64 -p functional-559559 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31386
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31386
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-559559 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.105.79 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-559559 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
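The tunnel group above reduces to a short lifecycle; a sketch, where the curl probe and the kill are our illustration of the HTTP check and teardown the test performs internally:

  out/minikube-linux-arm64 -p functional-559559 tunnel --alsologtostderr &    # keep running in the background
  kubectl --context functional-559559 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl -s http://10.102.105.79/    # the ingress IP reported above; reachable only while the tunnel runs
  kill %1                          # stopping the tunnel removes the route, as DeleteTunnel verifies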

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "329.766331ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "55.797479ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "342.407715ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "56.222207ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (8.96s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-559559 /tmp/TestFunctionalparallelMountCmdany-port1331347799/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724095464079604077" to /tmp/TestFunctionalparallelMountCmdany-port1331347799/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724095464079604077" to /tmp/TestFunctionalparallelMountCmdany-port1331347799/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724095464079604077" to /tmp/TestFunctionalparallelMountCmdany-port1331347799/001/test-1724095464079604077
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-559559 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.720145ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 19:24 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 19:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 19:24 test-1724095464079604077
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh cat /mount-9p/test-1724095464079604077
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-559559 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c8368fa5-2e18-46b0-9962-187454c16b3b] Pending
helpers_test.go:344: "busybox-mount" [c8368fa5-2e18-46b0-9962-187454c16b3b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c8368fa5-2e18-46b0-9962-187454c16b3b] Running
helpers_test.go:344: "busybox-mount" [c8368fa5-2e18-46b0-9962-187454c16b3b] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c8368fa5-2e18-46b0-9962-187454c16b3b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003738204s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-559559 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-559559 /tmp/TestFunctionalparallelMountCmdany-port1331347799/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.96s)
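The 9p mount verification above follows a fixed pattern; a sketch with a placeholder host directory ($SRC is ours, the test uses a per-run temp dir under /tmp):

  out/minikube-linux-arm64 mount -p functional-559559 "$SRC":/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-559559 ssh "findmnt -T /mount-9p | grep 9p"    # confirm the 9p mount is up
  out/minikube-linux-arm64 -p functional-559559 ssh -- ls -la /mount-9p                 # host files visible in the guest
  out/minikube-linux-arm64 -p functional-559559 ssh "sudo umount -f /mount-9p"          # tear down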

TestFunctional/parallel/MountCmd/specific-port (1.15s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-559559 /tmp/TestFunctionalparallelMountCmdspecific-port4035258790/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-559559 /tmp/TestFunctionalparallelMountCmdspecific-port4035258790/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-559559 ssh "sudo umount -f /mount-9p": exit status 1 (283.858655ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-559559 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-559559 /tmp/TestFunctionalparallelMountCmdspecific-port4035258790/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.15s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-559559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2550167192/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-559559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2550167192/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-559559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2550167192/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-559559 ssh "findmnt -T" /mount1: exit status 1 (522.439196ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-559559 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-559559 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-559559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2550167192/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-559559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2550167192/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-559559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2550167192/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-559559
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-559559
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-559559
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (118.02s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-655631 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 19:24:48.293674  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:24:58.535389  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:25:19.017255  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:25:59.979728  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-655631 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m57.189256401s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (118.02s)
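The start invocation here is the interesting part: --ha requests a multi-control-plane cluster (the StopSecondaryNode status output below shows the resulting three control planes; the m04 worker is added later by AddWorkerNode). The same cluster can be brought up by hand with the commands from the log:

  out/minikube-linux-arm64 start -p ha-655631 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p ha-655631 status -v=7 --alsologtostderr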

TestMultiControlPlane/serial/DeployApp (30.79s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-655631 -- rollout status deployment/busybox: (27.65447672s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-79lbg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-b6g4v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-qmjlp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-79lbg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-b6g4v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-qmjlp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-79lbg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-b6g4v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-qmjlp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (30.79s)

TestMultiControlPlane/serial/PingHostFromPods (1.62s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-79lbg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-79lbg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-b6g4v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-b6g4v -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-qmjlp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-qmjlp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.62s)
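A note on the shell pipeline: nslookup host.minikube.internal resolves the host gateway from inside the pod, awk 'NR==5' keeps the output line carrying the resolved address (the exact line and field positions depend on busybox's nslookup output format), and cut extracts the bare IP, which the follow-up ping then targets (192.168.49.1 on this network). Against the first pod, verbatim from the log:

  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-79lbg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-linux-arm64 kubectl -p ha-655631 -- exec busybox-7dff88458-79lbg -- sh -c "ping -c 1 192.168.49.1"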

TestMultiControlPlane/serial/AddWorkerNode (23.06s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-655631 -v=7 --alsologtostderr
E0819 19:27:21.901065  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-655631 -v=7 --alsologtostderr: (21.997202242s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-655631 status -v=7 --alsologtostderr: (1.060069466s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.06s)

TestMultiControlPlane/serial/NodeLabels (0.1s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-655631 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)

TestMultiControlPlane/serial/CopyFile (20.15s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-655631 status --output json -v=7 --alsologtostderr: (1.137181306s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp testdata/cp-test.txt ha-655631:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1132386147/001/cp-test_ha-655631.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631:/home/docker/cp-test.txt ha-655631-m02:/home/docker/cp-test_ha-655631_ha-655631-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m02 "sudo cat /home/docker/cp-test_ha-655631_ha-655631-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631:/home/docker/cp-test.txt ha-655631-m03:/home/docker/cp-test_ha-655631_ha-655631-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m03 "sudo cat /home/docker/cp-test_ha-655631_ha-655631-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631:/home/docker/cp-test.txt ha-655631-m04:/home/docker/cp-test_ha-655631_ha-655631-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m04 "sudo cat /home/docker/cp-test_ha-655631_ha-655631-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp testdata/cp-test.txt ha-655631-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1132386147/001/cp-test_ha-655631-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631-m02:/home/docker/cp-test.txt ha-655631:/home/docker/cp-test_ha-655631-m02_ha-655631.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631 "sudo cat /home/docker/cp-test_ha-655631-m02_ha-655631.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631-m02:/home/docker/cp-test.txt ha-655631-m03:/home/docker/cp-test_ha-655631-m02_ha-655631-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m03 "sudo cat /home/docker/cp-test_ha-655631-m02_ha-655631-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631-m02:/home/docker/cp-test.txt ha-655631-m04:/home/docker/cp-test_ha-655631-m02_ha-655631-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m04 "sudo cat /home/docker/cp-test_ha-655631-m02_ha-655631-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp testdata/cp-test.txt ha-655631-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1132386147/001/cp-test_ha-655631-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631-m03:/home/docker/cp-test.txt ha-655631:/home/docker/cp-test_ha-655631-m03_ha-655631.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631 "sudo cat /home/docker/cp-test_ha-655631-m03_ha-655631.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631-m03:/home/docker/cp-test.txt ha-655631-m02:/home/docker/cp-test_ha-655631-m03_ha-655631-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m02 "sudo cat /home/docker/cp-test_ha-655631-m03_ha-655631-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631-m03:/home/docker/cp-test.txt ha-655631-m04:/home/docker/cp-test_ha-655631-m03_ha-655631-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m04 "sudo cat /home/docker/cp-test_ha-655631-m03_ha-655631-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp testdata/cp-test.txt ha-655631-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1132386147/001/cp-test_ha-655631-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631-m04:/home/docker/cp-test.txt ha-655631:/home/docker/cp-test_ha-655631-m04_ha-655631.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631 "sudo cat /home/docker/cp-test_ha-655631-m04_ha-655631.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631-m04:/home/docker/cp-test.txt ha-655631-m02:/home/docker/cp-test_ha-655631-m04_ha-655631-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m02 "sudo cat /home/docker/cp-test_ha-655631-m04_ha-655631-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 cp ha-655631-m04:/home/docker/cp-test.txt ha-655631-m03:/home/docker/cp-test_ha-655631-m04_ha-655631-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m03 "sudo cat /home/docker/cp-test_ha-655631-m04_ha-655631-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.15s)
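The CopyFile matrix above runs minikube cp in every direction between the host and the four nodes; per node pair the pattern condenses to the following (host-side destination path shortened here for illustration):

  out/minikube-linux-arm64 -p ha-655631 cp testdata/cp-test.txt ha-655631:/home/docker/cp-test.txt            # host -> node
  out/minikube-linux-arm64 -p ha-655631 cp ha-655631:/home/docker/cp-test.txt /tmp/cp-test_ha-655631.txt      # node -> host
  out/minikube-linux-arm64 -p ha-655631 cp ha-655631:/home/docker/cp-test.txt ha-655631-m02:/home/docker/cp-test_ha-655631_ha-655631-m02.txt    # node -> node
  out/minikube-linux-arm64 -p ha-655631 ssh -n ha-655631-m02 "sudo cat /home/docker/cp-test_ha-655631_ha-655631-m02.txt"                        # verify on the target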

TestMultiControlPlane/serial/StopSecondaryNode (12.92s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-655631 node stop m02 -v=7 --alsologtostderr: (12.134100683s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-655631 status -v=7 --alsologtostderr: exit status 7 (782.348441ms)

-- stdout --
	ha-655631
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-655631-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-655631-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-655631-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 19:28:13.410363  772187 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:28:13.410506  772187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:28:13.410518  772187 out.go:358] Setting ErrFile to fd 2...
	I0819 19:28:13.410523  772187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:28:13.410749  772187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
	I0819 19:28:13.411058  772187 out.go:352] Setting JSON to false
	I0819 19:28:13.411133  772187 mustload.go:65] Loading cluster: ha-655631
	I0819 19:28:13.411220  772187 notify.go:220] Checking for updates...
	I0819 19:28:13.411562  772187 config.go:182] Loaded profile config "ha-655631": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 19:28:13.411584  772187 status.go:255] checking status of ha-655631 ...
	I0819 19:28:13.412094  772187 cli_runner.go:164] Run: docker container inspect ha-655631 --format={{.State.Status}}
	I0819 19:28:13.437132  772187 status.go:330] ha-655631 host status = "Running" (err=<nil>)
	I0819 19:28:13.437176  772187 host.go:66] Checking if "ha-655631" exists ...
	I0819 19:28:13.437501  772187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-655631
	I0819 19:28:13.462991  772187 host.go:66] Checking if "ha-655631" exists ...
	I0819 19:28:13.463301  772187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:28:13.463359  772187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-655631
	I0819 19:28:13.486672  772187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33548 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/ha-655631/id_rsa Username:docker}
	I0819 19:28:13.587295  772187 ssh_runner.go:195] Run: systemctl --version
	I0819 19:28:13.592929  772187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:28:13.606565  772187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 19:28:13.677764  772187 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-19 19:28:13.667618437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 19:28:13.678352  772187 kubeconfig.go:125] found "ha-655631" server: "https://192.168.49.254:8443"
	I0819 19:28:13.678388  772187 api_server.go:166] Checking apiserver status ...
	I0819 19:28:13.678437  772187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:28:13.691299  772187 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1492/cgroup
	I0819 19:28:13.703045  772187 api_server.go:182] apiserver freezer: "12:freezer:/docker/e603b7a47381f7013740cf067b2d6240b27e3db95202546c566f13358d6cfcc2/kubepods/burstable/pod493ff7e94220bba97b834d3edca96222/8025b0f499749304a5e8edc65a3e10573a79008dcac729fe2a0eb3cd9ed10b8b"
	I0819 19:28:13.703120  772187 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e603b7a47381f7013740cf067b2d6240b27e3db95202546c566f13358d6cfcc2/kubepods/burstable/pod493ff7e94220bba97b834d3edca96222/8025b0f499749304a5e8edc65a3e10573a79008dcac729fe2a0eb3cd9ed10b8b/freezer.state
	I0819 19:28:13.712700  772187 api_server.go:204] freezer state: "THAWED"
	I0819 19:28:13.712730  772187 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 19:28:13.722398  772187 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 19:28:13.722429  772187 status.go:422] ha-655631 apiserver status = Running (err=<nil>)
	I0819 19:28:13.722442  772187 status.go:257] ha-655631 status: &{Name:ha-655631 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:28:13.722460  772187 status.go:255] checking status of ha-655631-m02 ...
	I0819 19:28:13.722792  772187 cli_runner.go:164] Run: docker container inspect ha-655631-m02 --format={{.State.Status}}
	I0819 19:28:13.740831  772187 status.go:330] ha-655631-m02 host status = "Stopped" (err=<nil>)
	I0819 19:28:13.740856  772187 status.go:343] host is not running, skipping remaining checks
	I0819 19:28:13.740865  772187 status.go:257] ha-655631-m02 status: &{Name:ha-655631-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:28:13.740886  772187 status.go:255] checking status of ha-655631-m03 ...
	I0819 19:28:13.741207  772187 cli_runner.go:164] Run: docker container inspect ha-655631-m03 --format={{.State.Status}}
	I0819 19:28:13.760440  772187 status.go:330] ha-655631-m03 host status = "Running" (err=<nil>)
	I0819 19:28:13.760469  772187 host.go:66] Checking if "ha-655631-m03" exists ...
	I0819 19:28:13.760782  772187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-655631-m03
	I0819 19:28:13.779206  772187 host.go:66] Checking if "ha-655631-m03" exists ...
	I0819 19:28:13.779525  772187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:28:13.779575  772187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-655631-m03
	I0819 19:28:13.801773  772187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/ha-655631-m03/id_rsa Username:docker}
	I0819 19:28:13.906875  772187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:28:13.920303  772187 kubeconfig.go:125] found "ha-655631" server: "https://192.168.49.254:8443"
	I0819 19:28:13.920335  772187 api_server.go:166] Checking apiserver status ...
	I0819 19:28:13.920378  772187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:28:13.933033  772187 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1412/cgroup
	I0819 19:28:13.943192  772187 api_server.go:182] apiserver freezer: "12:freezer:/docker/c35fbd7af2e65ac184eda81ac035e2d1625421b28087e26d0fb706eda5fe6499/kubepods/burstable/pod224b213352747febb9072c8547f0109e/f4332fafd7a9e13b18c3923673f6538464aae102cce3cf9ec97050d014670070"
	I0819 19:28:13.943285  772187 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c35fbd7af2e65ac184eda81ac035e2d1625421b28087e26d0fb706eda5fe6499/kubepods/burstable/pod224b213352747febb9072c8547f0109e/f4332fafd7a9e13b18c3923673f6538464aae102cce3cf9ec97050d014670070/freezer.state
	I0819 19:28:13.953433  772187 api_server.go:204] freezer state: "THAWED"
	I0819 19:28:13.953468  772187 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 19:28:13.962321  772187 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 19:28:13.962350  772187 status.go:422] ha-655631-m03 apiserver status = Running (err=<nil>)
	I0819 19:28:13.962386  772187 status.go:257] ha-655631-m03 status: &{Name:ha-655631-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:28:13.962410  772187 status.go:255] checking status of ha-655631-m04 ...
	I0819 19:28:13.962746  772187 cli_runner.go:164] Run: docker container inspect ha-655631-m04 --format={{.State.Status}}
	I0819 19:28:13.980441  772187 status.go:330] ha-655631-m04 host status = "Running" (err=<nil>)
	I0819 19:28:13.980470  772187 host.go:66] Checking if "ha-655631-m04" exists ...
	I0819 19:28:13.980788  772187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-655631-m04
	I0819 19:28:13.998574  772187 host.go:66] Checking if "ha-655631-m04" exists ...
	I0819 19:28:13.998925  772187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:28:13.998976  772187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-655631-m04
	I0819 19:28:14.022996  772187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33563 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/ha-655631-m04/id_rsa Username:docker}
	I0819 19:28:14.119247  772187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:28:14.131384  772187 status.go:257] ha-655631-m04 status: &{Name:ha-655631-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.92s)
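
Aside: the stderr trace above shows how "minikube status" derives the apiserver state: it locates the kube-apiserver process, reads that process's freezer cgroup (to tell a paused apiserver from a running one), and then probes /healthz. A minimal shell sketch of the same probe, run inside a minikube node; the endpoint, PID handling, and cgroup v1 layout are taken from this log and are illustrative, not guaranteed for other drivers or kernels:

	# newest kube-apiserver process, as the pgrep call in the log does
	PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	# map the PID to its freezer cgroup path and confirm it is THAWED (not paused)
	CG=$(sudo egrep '^[0-9]+:freezer:' /proc/${PID}/cgroup | cut -d: -f3-)
	sudo cat "/sys/fs/cgroup/freezer${CG}/freezer.state"    # expect: THAWED
	# probe the load-balanced apiserver endpoint, as api_server.go does
	curl -ks https://192.168.49.254:8443/healthz            # expect: ok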

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.33s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-655631 node start m02 -v=7 --alsologtostderr: (17.124452942s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-655631 status -v=7 --alsologtostderr: (1.030266775s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.33s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.17s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.166890545s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.17s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (131.88s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-655631 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-655631 -v=7 --alsologtostderr
E0819 19:28:59.816395  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:28:59.825079  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:28:59.836436  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:28:59.858201  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:28:59.899550  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:28:59.980899  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:29:00.142369  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:29:00.463746  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:29:01.105166  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:29:02.386816  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:29:04.949137  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:29:10.071071  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-655631 -v=7 --alsologtostderr: (37.406987706s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-655631 --wait=true -v=7 --alsologtostderr
E0819 19:29:20.312766  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:29:38.039175  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:29:40.794069  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:30:05.742313  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:30:21.756483  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-655631 --wait=true -v=7 --alsologtostderr: (1m34.319735755s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-655631
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (131.88s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.97s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-655631 node delete m03 -v=7 --alsologtostderr: (8.992027022s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.97s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

TestMultiControlPlane/serial/StopCluster (36.14s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-655631 stop -v=7 --alsologtostderr: (36.028081255s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-655631 status -v=7 --alsologtostderr: exit status 7 (112.236588ms)

-- stdout --
	ha-655631
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-655631-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-655631-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0819 19:31:32.645103  786245 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:31:32.645326  786245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:31:32.645354  786245 out.go:358] Setting ErrFile to fd 2...
	I0819 19:31:32.645374  786245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:31:32.645695  786245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
	I0819 19:31:32.645934  786245 out.go:352] Setting JSON to false
	I0819 19:31:32.646007  786245 mustload.go:65] Loading cluster: ha-655631
	I0819 19:31:32.646084  786245 notify.go:220] Checking for updates...
	I0819 19:31:32.646503  786245 config.go:182] Loaded profile config "ha-655631": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 19:31:32.646524  786245 status.go:255] checking status of ha-655631 ...
	I0819 19:31:32.647097  786245 cli_runner.go:164] Run: docker container inspect ha-655631 --format={{.State.Status}}
	I0819 19:31:32.665340  786245 status.go:330] ha-655631 host status = "Stopped" (err=<nil>)
	I0819 19:31:32.665364  786245 status.go:343] host is not running, skipping remaining checks
	I0819 19:31:32.665373  786245 status.go:257] ha-655631 status: &{Name:ha-655631 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:31:32.665405  786245 status.go:255] checking status of ha-655631-m02 ...
	I0819 19:31:32.665750  786245 cli_runner.go:164] Run: docker container inspect ha-655631-m02 --format={{.State.Status}}
	I0819 19:31:32.690641  786245 status.go:330] ha-655631-m02 host status = "Stopped" (err=<nil>)
	I0819 19:31:32.690664  786245 status.go:343] host is not running, skipping remaining checks
	I0819 19:31:32.690673  786245 status.go:257] ha-655631-m02 status: &{Name:ha-655631-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:31:32.690696  786245 status.go:255] checking status of ha-655631-m04 ...
	I0819 19:31:32.691030  786245 cli_runner.go:164] Run: docker container inspect ha-655631-m04 --format={{.State.Status}}
	I0819 19:31:32.707451  786245 status.go:330] ha-655631-m04 host status = "Stopped" (err=<nil>)
	I0819 19:31:32.707472  786245 status.go:343] host is not running, skipping remaining checks
	I0819 19:31:32.707479  786245 status.go:257] ha-655631-m04 status: &{Name:ha-655631-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.14s)

TestMultiControlPlane/serial/RestartCluster (64.91s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-655631 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 19:31:43.677822  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-655631 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m3.94635463s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (64.91s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

TestMultiControlPlane/serial/AddSecondaryNode (41.2s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-655631 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-655631 --control-plane -v=7 --alsologtostderr: (40.192581277s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-655631 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-655631 status -v=7 --alsologtostderr: (1.011503537s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.20s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

TestJSONOutput/start/Command (50.69s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-884196 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0819 19:33:59.815104  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-884196 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (50.692467465s)
--- PASS: TestJSONOutput/start/Command (50.69s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-884196 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-884196 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.8s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-884196 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-884196 --output=json --user=testUser: (5.799293294s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-278038 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-278038 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (81.375908ms)

-- stdout --
	{"specversion":"1.0","id":"1f2c8664-0451-4eb7-8808-bbae558aefb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-278038] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"27e56a72-cd31-4519-b976-c548f2113b5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19468"}}
	{"specversion":"1.0","id":"b5d4651d-d95a-4587-922d-20fc4546bb9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9ab37825-3c1b-4078-b6c6-2fe5a225cb24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19468-713648/kubeconfig"}}
	{"specversion":"1.0","id":"50e19cbe-57ff-481f-a711-e01a7dfda1d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-713648/.minikube"}}
	{"specversion":"1.0","id":"c36f2a48-d8eb-4608-82df-58b52070e006","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"30e683e4-0126-4cd4-9ce9-4ee46cc55639","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a044ce3d-0250-4f8f-99db-51bf03eaedb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-278038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-278038
--- PASS: TestErrorJSONOutput (0.22s)
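
Aside: each line of the --output=json stream above is a CloudEvents-style JSON object whose "type" field separates setup steps, info messages, and errors. A minimal sketch for pulling error messages out of such a stream with jq (jq is not part of this test suite, and the profile name is illustrative):

	out/minikube-linux-arm64 start -p json-demo --output=json 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'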

TestKicCustomNetwork/create_custom_network (40.96s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-045655 --network=
E0819 19:34:38.039938  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-045655 --network=: (38.854384859s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-045655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-045655
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-045655: (2.083267164s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.96s)

TestKicCustomNetwork/use_default_bridge_network (33.35s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-111808 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-111808 --network=bridge: (31.321471833s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-111808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-111808
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-111808: (2.005071734s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.35s)

TestKicExistingNetwork (36.95s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-526447 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-526447 --network=existing-network: (34.716544779s)
helpers_test.go:175: Cleaning up "existing-network-526447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-526447
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-526447: (2.043311136s)
--- PASS: TestKicExistingNetwork (36.95s)

TestKicCustomSubnet (34.12s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-995177 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-995177 --subnet=192.168.60.0/24: (32.011542494s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-995177 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-995177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-995177
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-995177: (2.082386806s)
--- PASS: TestKicCustomSubnet (34.12s)
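
Aside: the subnet check above works because minikube names the Docker network after the profile, so the requested CIDR can be read back from the network's IPAM config. A minimal sketch with an illustrative profile name (the --subnet value matches the one used in the test):

	out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"    # expect: 192.168.60.0/24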

TestKicStaticIP (34.72s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-733610 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-733610 --static-ip=192.168.200.200: (32.463575468s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-733610 ip
helpers_test.go:175: Cleaning up "static-ip-733610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-733610
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-733610: (2.102468348s)
--- PASS: TestKicStaticIP (34.72s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (71.33s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-785123 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-785123 --driver=docker  --container-runtime=containerd: (31.93777704s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-788446 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-788446 --driver=docker  --container-runtime=containerd: (33.681618448s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-785123
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-788446
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-788446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-788446
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-788446: (2.07197212s)
helpers_test.go:175: Cleaning up "first-785123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-785123
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-785123: (2.247493415s)
--- PASS: TestMinikubeProfile (71.33s)

TestMountStart/serial/StartWithMountFirst (6.29s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-765422 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-765422 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.292641781s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.29s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-765422 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (8.95s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-778632 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-778632 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.949750245s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.95s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-778632 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-765422 --alsologtostderr -v=5
E0819 19:38:59.815487  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-765422 --alsologtostderr -v=5: (1.599314695s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.44s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-778632 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.44s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-778632
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-778632: (1.263261465s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (8.25s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-778632
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-778632: (7.250379719s)
--- PASS: TestMountStart/serial/RestartStopped (8.25s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-778632 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (69.95s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-214284 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 19:39:38.039880  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-214284 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m9.395083223s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.95s)

TestMultiNode/serial/DeployApp2Nodes (17.95s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-214284 -- rollout status deployment/busybox: (16.105559795s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- exec busybox-7dff88458-plcmb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- exec busybox-7dff88458-q56l2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- exec busybox-7dff88458-plcmb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- exec busybox-7dff88458-q56l2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- exec busybox-7dff88458-plcmb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- exec busybox-7dff88458-q56l2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.95s)

TestMultiNode/serial/PingHostFrom2Pods (1.1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- exec busybox-7dff88458-plcmb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- exec busybox-7dff88458-plcmb -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- exec busybox-7dff88458-q56l2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214284 -- exec busybox-7dff88458-q56l2 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.10s)
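
Aside: the sh pipeline above recovers the host IP from busybox nslookup output. In that output the fifth line reads "Address 1: <ip> host.minikube.internal", so awk 'NR==5' selects it and cut -d' ' -f3 keeps the IP, which the test then pings from the pod. The same check standalone, with an illustrative pod name (the line-5 layout assumes busybox's nslookup, not the GNU one):

	HOST_IP=$(kubectl exec busybox-demo -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl exec busybox-demo -- ping -c 1 "${HOST_IP}"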

TestMultiNode/serial/AddNode (16.01s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-214284 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-214284 -v 3 --alsologtostderr: (15.299787123s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.01s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-214284 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (10.29s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 cp testdata/cp-test.txt multinode-214284:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 cp multinode-214284:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3107824571/001/cp-test_multinode-214284.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 cp multinode-214284:/home/docker/cp-test.txt multinode-214284-m02:/home/docker/cp-test_multinode-214284_multinode-214284-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284-m02 "sudo cat /home/docker/cp-test_multinode-214284_multinode-214284-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 cp multinode-214284:/home/docker/cp-test.txt multinode-214284-m03:/home/docker/cp-test_multinode-214284_multinode-214284-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284 "sudo cat /home/docker/cp-test.txt"
E0819 19:41:01.104354  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284-m03 "sudo cat /home/docker/cp-test_multinode-214284_multinode-214284-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 cp testdata/cp-test.txt multinode-214284-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 cp multinode-214284-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3107824571/001/cp-test_multinode-214284-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 cp multinode-214284-m02:/home/docker/cp-test.txt multinode-214284:/home/docker/cp-test_multinode-214284-m02_multinode-214284.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284 "sudo cat /home/docker/cp-test_multinode-214284-m02_multinode-214284.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 cp multinode-214284-m02:/home/docker/cp-test.txt multinode-214284-m03:/home/docker/cp-test_multinode-214284-m02_multinode-214284-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284-m03 "sudo cat /home/docker/cp-test_multinode-214284-m02_multinode-214284-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 cp testdata/cp-test.txt multinode-214284-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 cp multinode-214284-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3107824571/001/cp-test_multinode-214284-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 cp multinode-214284-m03:/home/docker/cp-test.txt multinode-214284:/home/docker/cp-test_multinode-214284-m03_multinode-214284.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284 "sudo cat /home/docker/cp-test_multinode-214284-m03_multinode-214284.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 cp multinode-214284-m03:/home/docker/cp-test.txt multinode-214284-m02:/home/docker/cp-test_multinode-214284-m03_multinode-214284-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 ssh -n multinode-214284-m02 "sudo cat /home/docker/cp-test_multinode-214284-m03_multinode-214284-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.29s)
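
Note: the block above exercises every copy direction minikube cp supports (host to node, node to host, node to node), each verified with an ssh cat. A minimal repro sketch, writing minikube for the report's out/minikube-linux-arm64 binary; profile/node names are the ones from this run and the /tmp destination is illustrative:

  minikube -p multinode-214284 cp testdata/cp-test.txt multinode-214284:/home/docker/cp-test.txt       # host -> node
  minikube -p multinode-214284 cp multinode-214284:/home/docker/cp-test.txt /tmp/cp-test.txt           # node -> host
  minikube -p multinode-214284 cp multinode-214284:/home/docker/cp-test.txt multinode-214284-m02:/home/docker/cp-test.txt   # node -> node
  minikube -p multinode-214284 ssh -n multinode-214284-m02 "sudo cat /home/docker/cp-test.txt"         # verify on the target node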

TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-214284 node stop m03: (1.218366788s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-214284 status: exit status 7 (506.775587ms)

-- stdout --
	multinode-214284
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-214284-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-214284-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-214284 status --alsologtostderr: exit status 7 (506.649845ms)

-- stdout --
	multinode-214284
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-214284-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-214284-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0819 19:41:09.447290  839673 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:41:09.447425  839673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:41:09.447431  839673 out.go:358] Setting ErrFile to fd 2...
	I0819 19:41:09.447435  839673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:41:09.447701  839673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
	I0819 19:41:09.447921  839673 out.go:352] Setting JSON to false
	I0819 19:41:09.447981  839673 mustload.go:65] Loading cluster: multinode-214284
	I0819 19:41:09.448063  839673 notify.go:220] Checking for updates...
	I0819 19:41:09.448954  839673 config.go:182] Loaded profile config "multinode-214284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 19:41:09.448980  839673 status.go:255] checking status of multinode-214284 ...
	I0819 19:41:09.449551  839673 cli_runner.go:164] Run: docker container inspect multinode-214284 --format={{.State.Status}}
	I0819 19:41:09.466997  839673 status.go:330] multinode-214284 host status = "Running" (err=<nil>)
	I0819 19:41:09.467023  839673 host.go:66] Checking if "multinode-214284" exists ...
	I0819 19:41:09.467332  839673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-214284
	I0819 19:41:09.492891  839673 host.go:66] Checking if "multinode-214284" exists ...
	I0819 19:41:09.493310  839673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:41:09.493369  839673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-214284
	I0819 19:41:09.510212  839673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33668 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/multinode-214284/id_rsa Username:docker}
	I0819 19:41:09.602941  839673 ssh_runner.go:195] Run: systemctl --version
	I0819 19:41:09.607464  839673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:41:09.619426  839673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 19:41:09.681217  839673 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-19 19:41:09.671125065 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 19:41:09.681885  839673 kubeconfig.go:125] found "multinode-214284" server: "https://192.168.67.2:8443"
	I0819 19:41:09.681924  839673 api_server.go:166] Checking apiserver status ...
	I0819 19:41:09.681981  839673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:41:09.693496  839673 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1383/cgroup
	I0819 19:41:09.704013  839673 api_server.go:182] apiserver freezer: "12:freezer:/docker/f96f83b32d31ef4a9bf6c20c00b7f77e18a022e8b22531c451d960cb828777c0/kubepods/burstable/podd05b8129e2b2dbff398cf9f5737271c4/3a3e3a896410fa19ee43b61148eaa3cac265365501bddd3d7ed9fcb1352046cf"
	I0819 19:41:09.704095  839673 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f96f83b32d31ef4a9bf6c20c00b7f77e18a022e8b22531c451d960cb828777c0/kubepods/burstable/podd05b8129e2b2dbff398cf9f5737271c4/3a3e3a896410fa19ee43b61148eaa3cac265365501bddd3d7ed9fcb1352046cf/freezer.state
	I0819 19:41:09.713096  839673 api_server.go:204] freezer state: "THAWED"
	I0819 19:41:09.713130  839673 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0819 19:41:09.722911  839673 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0819 19:41:09.722943  839673 status.go:422] multinode-214284 apiserver status = Running (err=<nil>)
	I0819 19:41:09.722964  839673 status.go:257] multinode-214284 status: &{Name:multinode-214284 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:41:09.723020  839673 status.go:255] checking status of multinode-214284-m02 ...
	I0819 19:41:09.723376  839673 cli_runner.go:164] Run: docker container inspect multinode-214284-m02 --format={{.State.Status}}
	I0819 19:41:09.744613  839673 status.go:330] multinode-214284-m02 host status = "Running" (err=<nil>)
	I0819 19:41:09.744642  839673 host.go:66] Checking if "multinode-214284-m02" exists ...
	I0819 19:41:09.744963  839673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-214284-m02
	I0819 19:41:09.763887  839673 host.go:66] Checking if "multinode-214284-m02" exists ...
	I0819 19:41:09.764244  839673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:41:09.764294  839673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-214284-m02
	I0819 19:41:09.781958  839673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33673 SSHKeyPath:/home/jenkins/minikube-integration/19468-713648/.minikube/machines/multinode-214284-m02/id_rsa Username:docker}
	I0819 19:41:09.875472  839673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:41:09.887556  839673 status.go:257] multinode-214284-m02 status: &{Name:multinode-214284-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:41:09.887606  839673 status.go:255] checking status of multinode-214284-m03 ...
	I0819 19:41:09.887918  839673 cli_runner.go:164] Run: docker container inspect multinode-214284-m03 --format={{.State.Status}}
	I0819 19:41:09.905899  839673 status.go:330] multinode-214284-m03 host status = "Stopped" (err=<nil>)
	I0819 19:41:09.905925  839673 status.go:343] host is not running, skipping remaining checks
	I0819 19:41:09.905934  839673 status.go:257] multinode-214284-m03 status: &{Name:multinode-214284-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
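
Note: stopping a single node is enough to make minikube status exit non-zero; the exit status 7 captured above is the expected "some component stopped" code, not a failure of the status command itself. Sketch of the same check:

  minikube -p multinode-214284 node stop m03
  minikube -p multinode-214284 status
  echo $?    # 7 while m03 is stopped, 0 once every node is running again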

TestMultiNode/serial/StartAfterStop (9.6s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-214284 node start m03 -v=7 --alsologtostderr: (8.852114703s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.60s)
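
Note: the stopped worker is brought back through the node subcommand and then confirmed from both sides, minikube status and the Kubernetes API. Sketch:

  minikube -p multinode-214284 node start m03
  minikube -p multinode-214284 status
  kubectl get nodes    # m03 should report Ready again shortly after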

TestMultiNode/serial/RestartKeepsNodes (97.82s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-214284
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-214284
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-214284: (25.026643757s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-214284 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-214284 --wait=true -v=8 --alsologtostderr: (1m12.670207629s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-214284
--- PASS: TestMultiNode/serial/RestartKeepsNodes (97.82s)
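
Note: the property under test is that a full stop/start cycle of the profile preserves the node list rather than collapsing back to a single node. Sketch:

  minikube node list -p multinode-214284    # record the nodes
  minikube stop -p multinode-214284
  minikube start -p multinode-214284 --wait=true
  minikube node list -p multinode-214284    # expect the same node list as before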

TestMultiNode/serial/DeleteNode (5.62s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-214284 node delete m03: (4.91379816s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.62s)
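
Note: after the delete, readiness of the remaining nodes is asserted with a go-template that prints one status line per node. Essentially the same check as above, re-quoted for an interactive shell:

  minikube -p multinode-214284 node delete m03
  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
  # expect one "True" line per remaining node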

TestMultiNode/serial/StopMultiNode (23.96s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-214284 stop: (23.788366503s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-214284 status: exit status 7 (85.323257ms)

-- stdout --
	multinode-214284
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-214284-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-214284 status --alsologtostderr: exit status 7 (86.692588ms)

-- stdout --
	multinode-214284
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-214284-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0819 19:43:26.867666  848154 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:43:26.867861  848154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:43:26.867889  848154 out.go:358] Setting ErrFile to fd 2...
	I0819 19:43:26.867913  848154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:43:26.868152  848154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
	I0819 19:43:26.868388  848154 out.go:352] Setting JSON to false
	I0819 19:43:26.868458  848154 mustload.go:65] Loading cluster: multinode-214284
	I0819 19:43:26.868524  848154 notify.go:220] Checking for updates...
	I0819 19:43:26.868879  848154 config.go:182] Loaded profile config "multinode-214284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 19:43:26.868901  848154 status.go:255] checking status of multinode-214284 ...
	I0819 19:43:26.869398  848154 cli_runner.go:164] Run: docker container inspect multinode-214284 --format={{.State.Status}}
	I0819 19:43:26.886949  848154 status.go:330] multinode-214284 host status = "Stopped" (err=<nil>)
	I0819 19:43:26.886968  848154 status.go:343] host is not running, skipping remaining checks
	I0819 19:43:26.886975  848154 status.go:257] multinode-214284 status: &{Name:multinode-214284 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:43:26.886998  848154 status.go:255] checking status of multinode-214284-m02 ...
	I0819 19:43:26.887311  848154 cli_runner.go:164] Run: docker container inspect multinode-214284-m02 --format={{.State.Status}}
	I0819 19:43:26.911213  848154 status.go:330] multinode-214284-m02 host status = "Stopped" (err=<nil>)
	I0819 19:43:26.911234  848154 status.go:343] host is not running, skipping remaining checks
	I0819 19:43:26.911241  848154 status.go:257] multinode-214284-m02 status: &{Name:multinode-214284-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.96s)
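
Note: unlike node stop, a plain stop on the profile halts every remaining node at once, and status again uses exit status 7 to signal stopped components. Sketch:

  minikube -p multinode-214284 stop
  minikube -p multinode-214284 status; echo $?    # all hosts and kubelets Stopped, exit 7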

TestMultiNode/serial/RestartMultiNode (47.88s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-214284 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 19:43:59.814994  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-214284 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.19306396s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214284 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.88s)

TestMultiNode/serial/ValidateNameConflict (33.14s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-214284
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-214284-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-214284-m02 --driver=docker  --container-runtime=containerd: exit status 14 (85.05152ms)

-- stdout --
	* [multinode-214284-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-713648/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-713648/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-214284-m02' is duplicated with machine name 'multinode-214284-m02' in profile 'multinode-214284'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-214284-m03 --driver=docker  --container-runtime=containerd
E0819 19:44:38.039881  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-214284-m03 --driver=docker  --container-runtime=containerd: (30.705757292s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-214284
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-214284: exit status 80 (325.396314ms)

-- stdout --
	* Adding node m03 to cluster multinode-214284 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-214284-m03 already exists in multinode-214284-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-214284-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-214284-m03: (1.966890905s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.14s)
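
Note: two distinct name-collision checks are exercised above: a new profile may not reuse a machine name that already exists inside a multi-node profile (MK_USAGE, exit 14), and node add refuses to absorb a standalone profile whose name matches the node it would create (GUEST_NODE_ADD, exit 80). Sketch of the first case:

  minikube start -p multinode-214284-m02 --driver=docker --container-runtime=containerd
  # X Exiting due to MK_USAGE: Profile name should be unique   (exit status 14)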

TestPreload (111.57s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-197844 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0819 19:45:22.880686  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-197844 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m10.935967193s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-197844 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-197844 image pull gcr.io/k8s-minikube/busybox: (1.272185267s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-197844
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-197844: (12.090852143s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-197844 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-197844 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (24.236513931s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-197844 image list
helpers_test.go:175: Cleaning up "test-preload-197844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-197844
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-197844: (2.603014517s)
--- PASS: TestPreload (111.57s)
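
Note: the scenario starts an older cluster with preloads disabled, side-loads an image, then restarts on the current binary and checks that the image survived the restart. Sketch of the image round-trip:

  minikube -p test-preload-197844 image pull gcr.io/k8s-minikube/busybox
  minikube stop -p test-preload-197844
  minikube start -p test-preload-197844 --wait=true
  minikube -p test-preload-197844 image list    # busybox should still be listed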

TestScheduledStopUnix (105.74s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-479028 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-479028 --memory=2048 --driver=docker  --container-runtime=containerd: (29.85146459s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-479028 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-479028 -n scheduled-stop-479028
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-479028 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-479028 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-479028 -n scheduled-stop-479028
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-479028
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-479028 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-479028
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-479028: exit status 7 (66.742282ms)

-- stdout --
	scheduled-stop-479028
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-479028 -n scheduled-stop-479028
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-479028 -n scheduled-stop-479028: exit status 7 (72.45019ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-479028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-479028
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-479028: (4.337525972s)
--- PASS: TestScheduledStopUnix (105.74s)
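
Note: scheduled stops are armed, re-armed, and cancelled entirely through the stop subcommand, and the pending timer is visible via the TimeToStop status field. Sketch:

  minikube stop -p scheduled-stop-479028 --schedule 5m             # arm a stop
  minikube status -p scheduled-stop-479028 --format='{{.TimeToStop}}'
  minikube stop -p scheduled-stop-479028 --cancel-scheduled        # disarm
  minikube stop -p scheduled-stop-479028 --schedule 15s            # re-arm; the host stops ~15s later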

TestInsufficientStorage (10.85s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-958045 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-958045 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.36133648s)

-- stdout --
	{"specversion":"1.0","id":"cecc4431-8aa6-44d0-aadc-3812595b2257","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-958045] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cec6c0f6-755d-4fb0-84a1-70de2e84edc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19468"}}
	{"specversion":"1.0","id":"c734698f-e784-483c-bea4-c300a01c6cb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6f098bb8-022e-4926-bc7d-0b5e63b10321","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19468-713648/kubeconfig"}}
	{"specversion":"1.0","id":"583127c4-eee5-4cf0-a28e-12683dc07f54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-713648/.minikube"}}
	{"specversion":"1.0","id":"d6e488dd-d1a1-47ce-a5ae-3f2c2e6fe1fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7f138284-3d07-4181-b24d-0920e2cb5a7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"94bef1d0-b69b-4492-8058-52cdcd476315","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1361a5c4-8dea-4862-8359-c7e97a446d33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0469787a-1e99-41c3-bc4e-e6453657cf39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8c4b091-0989-4d9c-a7e8-378ed067f719","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1f8d4e5a-f5df-4192-9384-5323bdfec239","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-958045\" primary control-plane node in \"insufficient-storage-958045\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"90d7f388-a19b-4efa-996e-d075781eed8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723740748-19452 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1796589-434a-4b6a-8282-76958c52551b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b22b908-de7b-4d79-b095-1c67a7633233","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-958045 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-958045 --output=json --layout=cluster: exit status 7 (288.432062ms)

-- stdout --
	{"Name":"insufficient-storage-958045","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-958045","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0819 19:48:37.847719  866726 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-958045" does not appear in /home/jenkins/minikube-integration/19468-713648/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-958045 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-958045 --output=json --layout=cluster: exit status 7 (293.092309ms)

-- stdout --
	{"Name":"insufficient-storage-958045","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-958045","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0819 19:48:38.141344  866786 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-958045" does not appear in /home/jenkins/minikube-integration/19468-713648/kubeconfig
	E0819 19:48:38.152422  866786 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/insufficient-storage-958045/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-958045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-958045
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-958045: (1.908701798s)
--- PASS: TestInsufficientStorage (10.85s)
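
Note: with --output=json, start emits one CloudEvents-style JSON object per line, and the storage check surfaces as an io.k8s.sigs.minikube.error event with exitcode 26 (RSRC_DOCKER_STORAGE); the MINIKUBE_TEST_* values in the stdout above are test-only environment overrides that simulate a full disk. A sketch for isolating the error event (the jq filter is my addition, not part of the test):

  MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
    minikube start -p insufficient-storage-958045 --memory=2048 --output=json --wait=true --driver=docker \
    | jq -c 'select(.type == "io.k8s.sigs.minikube.error")'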

TestRunningBinaryUpgrade (84.53s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2928199006 start -p running-upgrade-285336 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2928199006 start -p running-upgrade-285336 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.704719299s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-285336 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-285336 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.190378402s)
helpers_test.go:175: Cleaning up "running-upgrade-285336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-285336
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-285336: (2.575567081s)
--- PASS: TestRunningBinaryUpgrade (84.53s)
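
Note: the upgrade path here is "create the cluster with an old released binary, then run start again with the new binary against the live cluster". Sketch, with the old release assumed downloaded to an illustrative /tmp path (the test uses a randomized temp name):

  /tmp/minikube-v1.26.0 start -p running-upgrade-285336 --memory=2200 --vm-driver=docker --container-runtime=containerd
  minikube start -p running-upgrade-285336 --memory=2200 --driver=docker --container-runtime=containerd    # new binary upgrades in place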

TestKubernetesUpgrade (349.32s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-038136 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0819 19:53:59.815723  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:54:38.039997  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-038136 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (58.803067074s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-038136
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-038136: (1.260004131s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-038136 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-038136 status --format={{.Host}}: exit status 7 (84.343069ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-038136 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-038136 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m38.835067395s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-038136 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-038136 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-038136 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (99.589164ms)

-- stdout --
	* [kubernetes-upgrade-038136] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-713648/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-713648/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-038136
	    minikube start -p kubernetes-upgrade-038136 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0381362 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-038136 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-038136 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-038136 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.594517792s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-038136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-038136
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-038136: (2.547943785s)
--- PASS: TestKubernetesUpgrade (349.32s)
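
Note: an upgrade is a stop followed by start with a newer --kubernetes-version, while a direct downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED (exit 106); the suggested escape hatch, as printed above, is delete-and-recreate. Sketch:

  minikube start -p kubernetes-upgrade-038136 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
  minikube stop -p kubernetes-upgrade-038136
  minikube start -p kubernetes-upgrade-038136 --kubernetes-version=v1.31.0 --driver=docker --container-runtime=containerd
  minikube start -p kubernetes-upgrade-038136 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd   # refused, exit 106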

TestMissingContainerUpgrade (172.37s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2298997444 start -p missing-upgrade-465415 --memory=2200 --driver=docker  --container-runtime=containerd
E0819 19:48:59.815579  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2298997444 start -p missing-upgrade-465415 --memory=2200 --driver=docker  --container-runtime=containerd: (1m27.946651382s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-465415
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-465415
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-465415 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-465415 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m19.11574557s)
helpers_test.go:175: Cleaning up "missing-upgrade-465415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-465415
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-465415: (2.878422826s)
--- PASS: TestMissingContainerUpgrade (172.37s)
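
Note: this scenario deletes the node container out from under an old-binary cluster and verifies that the new binary's start recreates it from the profile state on disk. Sketch:

  docker stop missing-upgrade-465415 && docker rm missing-upgrade-465415
  minikube start -p missing-upgrade-465415 --memory=2200 --driver=docker --container-runtime=containerd   # rebuilds the missing container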

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-525780 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-525780 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (81.684683ms)

-- stdout --
	* [NoKubernetes-525780] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-713648/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-713648/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
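
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, so the command fails fast with MK_USAGE (exit 14) before touching the driver; for a globally configured version, the error's own suggestion is config unset. Sketch:

  minikube start -p NoKubernetes-525780 --no-kubernetes --kubernetes-version=1.20 --driver=docker   # exit 14
  minikube config unset kubernetes-version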

TestNoKubernetes/serial/StartWithK8s (39.09s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-525780 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-525780 --driver=docker  --container-runtime=containerd: (38.544911171s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-525780 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.09s)

TestNoKubernetes/serial/StartWithStopK8s (19.03s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-525780 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-525780 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.802746986s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-525780 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-525780 status -o json: exit status 2 (314.984002ms)

-- stdout --
	{"Name":"NoKubernetes-525780","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-525780
E0819 19:49:38.039981  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-525780: (1.915566648s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.03s)
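
Note: re-running start with --no-kubernetes against an existing profile keeps the host container but leaves kubelet and apiserver down, so status -o json reports Running/Stopped and exits 2, as captured above. Sketch:

  minikube start -p NoKubernetes-525780 --no-kubernetes --driver=docker --container-runtime=containerd
  minikube -p NoKubernetes-525780 status -o json; echo $?    # Host Running, Kubelet Stopped, exit 2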

TestNoKubernetes/serial/Start (6.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-525780 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-525780 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.299116414s)
--- PASS: TestNoKubernetes/serial/Start (6.30s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-525780 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-525780 "sudo systemctl is-active --quiet service kubelet": exit status 1 (388.008553ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
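
Note: the "Kubernetes is really off" check just probes the kubelet unit over minikube ssh; systemctl is-active returns non-zero for an inactive unit, so the ssh wrapper exits 1, which is what the test asserts. Sketch:

  minikube ssh -p NoKubernetes-525780 "sudo systemctl is-active --quiet service kubelet"
  echo $?    # non-zero while kubelet is not running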

TestNoKubernetes/serial/ProfileList (1.2s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.20s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-525780
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-525780: (1.270073887s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (7.01s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-525780 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-525780 --driver=docker  --container-runtime=containerd: (7.013248997s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.01s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-525780 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-525780 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.221492ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestNetworkPlugins/group/false (4.76s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-375051 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-375051 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (296.37947ms)
-- stdout --
	* [false-375051] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-713648/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-713648/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I0819 19:50:02.369406  876132 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:50:02.381774  876132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:50:02.381828  876132 out.go:358] Setting ErrFile to fd 2...
	I0819 19:50:02.381849  876132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:50:02.382181  876132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-713648/.minikube/bin
	I0819 19:50:02.382670  876132 out.go:352] Setting JSON to false
	I0819 19:50:02.383651  876132 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":12744,"bootTime":1724084259,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 19:50:02.383775  876132 start.go:139] virtualization:  
	I0819 19:50:02.387388  876132 out.go:177] * [false-375051] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 19:50:02.391608  876132 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:50:02.391685  876132 notify.go:220] Checking for updates...
	I0819 19:50:02.399078  876132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:50:02.402174  876132 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-713648/kubeconfig
	I0819 19:50:02.405187  876132 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-713648/.minikube
	I0819 19:50:02.408227  876132 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 19:50:02.411631  876132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:50:02.415010  876132 config.go:182] Loaded profile config "missing-upgrade-465415": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0819 19:50:02.415112  876132 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:50:02.456327  876132 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 19:50:02.456500  876132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 19:50:02.564415  876132 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 19:50:02.543478525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 19:50:02.564528  876132 docker.go:307] overlay module found
	I0819 19:50:02.567477  876132 out.go:177] * Using the docker driver based on user configuration
	I0819 19:50:02.570062  876132 start.go:297] selected driver: docker
	I0819 19:50:02.570087  876132 start.go:901] validating driver "docker" against <nil>
	I0819 19:50:02.570122  876132 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:50:02.573175  876132 out.go:201] 
	W0819 19:50:02.575919  876132 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0819 19:50:02.578816  876132 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-375051 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-375051

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-375051

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-375051

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-375051

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-375051

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-375051

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-375051

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-375051

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-375051

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-375051

>>> host: /etc/nsswitch.conf:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: /etc/hosts:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: /etc/resolv.conf:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-375051

>>> host: crictl pods:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: crictl containers:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> k8s: describe netcat deployment:
error: context "false-375051" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-375051" does not exist

>>> k8s: netcat logs:
error: context "false-375051" does not exist

>>> k8s: describe coredns deployment:
error: context "false-375051" does not exist

>>> k8s: describe coredns pods:
error: context "false-375051" does not exist

>>> k8s: coredns logs:
error: context "false-375051" does not exist

>>> k8s: describe api server pod(s):
error: context "false-375051" does not exist

>>> k8s: api server logs:
error: context "false-375051" does not exist

>>> host: /etc/cni:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: ip a s:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: ip r s:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: iptables-save:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: iptables table nat:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> k8s: describe kube-proxy daemon set:
error: context "false-375051" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-375051" does not exist

>>> k8s: kube-proxy logs:
error: context "false-375051" does not exist

>>> host: kubelet daemon status:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: kubelet daemon config:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> k8s: kubelet logs:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-375051

>>> host: docker daemon status:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: docker daemon config:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: /etc/docker/daemon.json:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: docker system info:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: cri-docker daemon status:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: cri-docker daemon config:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: cri-dockerd version:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: containerd daemon status:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: containerd daemon config:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: /etc/containerd/config.toml:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: containerd config dump:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: crio daemon status:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: crio daemon config:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: /etc/crio:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"

>>> host: crio config:
* Profile "false-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-375051"
----------------------- debugLogs end: false-375051 [took: 4.270210644s] --------------------------------
helpers_test.go:175: Cleaning up "false-375051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-375051
--- PASS: TestNetworkPlugins/group/false (4.76s)
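The exit status 14 above is deliberate: this test asserts that minikube refuses to start the containerd runtime with `--cni=false`, since every runtime other than Docker needs a CNI plugin to wire up pod networking. A rough sketch of that usage check, reconstructed from the MK_USAGE message in the log (assumed logic, not minikube's actual source):

// validateCNI mirrors the usage rule seen in the log: runtimes other
// than Docker require a CNI, so an explicit --cni=false is an error.
package main

import (
	"fmt"
	"os"
)

func validateCNI(runtime, cni string) error {
	if runtime != "docker" && cni == "false" {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // the exit code the test asserts
	}
}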

TestStoppedBinaryUpgrade/Setup (1.34s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.34s)

TestStoppedBinaryUpgrade/Upgrade (100.2s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3568865761 start -p stopped-upgrade-906858 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3568865761 start -p stopped-upgrade-906858 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.161922046s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3568865761 -p stopped-upgrade-906858 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3568865761 -p stopped-upgrade-906858 stop: (19.935239637s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-906858 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-906858 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.100393296s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (100.20s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-906858
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-906858: (1.08011165s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

TestPause/serial/Start (62.22s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-524579 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0819 19:57:41.105776  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-524579 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m2.223242645s)
--- PASS: TestPause/serial/Start (62.22s)

TestPause/serial/SecondStartNoReconfiguration (6.47s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-524579 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-524579 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.435461747s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.47s)

TestPause/serial/Pause (0.91s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-524579 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.91s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-524579 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-524579 --output=json --layout=cluster: exit status 2 (323.7995ms)
-- stdout --
	{"Name":"pause-524579","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-524579","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
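The status codes in the JSON above follow HTTP conventions: 200 OK for healthy components, 418 for a paused cluster, 405 for a stopped kubelet. The sketch below (struct shape inferred from the output shown; not minikube's own schema definition) decodes that payload:

// Decode a subset of `minikube status --output=json --layout=cluster`.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type cluster struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	raw := `{"Name":"pause-524579","StatusCode":418,"StatusName":"Paused",
	  "Nodes":[{"Name":"pause-524579","Components":{
	    "apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	    "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var c cluster
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d)\n", c.Name, c.StatusName, c.StatusCode)
	for _, n := range c.Nodes {
		for _, comp := range n.Components {
			fmt.Printf("  %s: %s (%d)\n", comp.Name, comp.StatusName, comp.StatusCode)
		}
	}
}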

TestPause/serial/Unpause (0.67s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-524579 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

TestPause/serial/PauseAgain (0.82s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-524579 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

TestPause/serial/DeletePaused (2.63s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-524579 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-524579 --alsologtostderr -v=5: (2.629938365s)
--- PASS: TestPause/serial/DeletePaused (2.63s)

TestPause/serial/VerifyDeletedResources (0.34s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-524579
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-524579: exit status 1 (16.894274ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-524579: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.34s)
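The failed `docker volume inspect` is the assertion that matters here: once `minikube delete` has run, the profile's Docker volume should no longer exist, so the inspect call must exit non-zero with "no such volume". A hedged Go sketch of that cleanup check (the helper name is assumed, not the test's actual code):

// volumeDeleted reports whether a Docker volume is gone by checking that
// `docker volume inspect` fails and its output mentions "no such volume",
// matching the daemon error shown in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func volumeDeleted(name string) bool {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	return err != nil && strings.Contains(string(out), "no such volume")
}

func main() {
	fmt.Println("pause-524579 deleted:", volumeDeleted("pause-524579"))
}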

TestNetworkPlugins/group/auto/Start (50.58s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-375051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0819 19:58:59.815577  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-375051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (50.583396402s)
--- PASS: TestNetworkPlugins/group/auto/Start (50.58s)

TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-375051 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

TestNetworkPlugins/group/auto/NetCatPod (10.44s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-375051 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nx56n" [3e5b001d-f110-475d-9861-edd6a0638fe4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nx56n" [3e5b001d-f110-475d-9861-edd6a0638fe4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004397777s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.44s)

TestNetworkPlugins/group/auto/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-375051 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)

TestNetworkPlugins/group/auto/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-375051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.26s)

TestNetworkPlugins/group/auto/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-375051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.27s)
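The DNS, Localhost, and HairPin subtests all execute a probe inside the netcat deployment via kubectl: an nslookup of kubernetes.default, an nc dial to localhost:8080, and an nc dial back to the pod's own service name, which only succeeds when hairpin traffic is allowed. A compact Go sketch of those three probes (the wrapper function is assumed, not the suite's actual helper):

// probe runs a command inside the netcat deployment of the given
// kubeconfig context, mirroring the kubectl invocations in the log.
package main

import (
	"fmt"
	"os/exec"
)

func probe(kubeContext string, args ...string) error {
	base := []string{"--context", kubeContext, "exec", "deployment/netcat", "--"}
	return exec.Command("kubectl", append(base, args...)...).Run()
}

func main() {
	ctx := "auto-375051"
	checks := map[string][]string{
		"DNS":       {"nslookup", "kubernetes.default"},
		"Localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		"HairPin":   {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for name, args := range checks {
		fmt.Printf("%s: %v\n", name, probe(ctx, args...))
	}
}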

TestNetworkPlugins/group/kindnet/Start (58.89s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-375051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0819 19:59:38.039619  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-375051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (58.886988745s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.89s)

TestNetworkPlugins/group/calico/Start (69.43s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-375051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-375051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m9.431558281s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.43s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wn266" [0f74bf32-58b0-4b63-bb99-9bb34209b277] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003657155s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-375051 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.51s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-375051 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b6pnz" [d7858daf-8ffc-433a-9372-9658bb050f99] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b6pnz" [d7858daf-8ffc-433a-9372-9658bb050f99] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00409355s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.51s)

TestNetworkPlugins/group/kindnet/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-375051 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.31s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-375051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-375051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-kvhbs" [79dedeae-286d-430b-a5d3-2e4b16ca3f87] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007706419s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-375051 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

TestNetworkPlugins/group/calico/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-375051 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rt75t" [add3d907-fc45-4050-9b4f-43d4c78e5735] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rt75t" [add3d907-fc45-4050-9b4f-43d4c78e5735] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005276761s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.37s)

TestNetworkPlugins/group/custom-flannel/Start (60.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-375051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-375051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m0.503373794s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.50s)

TestNetworkPlugins/group/calico/DNS (0.36s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-375051 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.36s)

TestNetworkPlugins/group/calico/Localhost (0.37s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-375051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.37s)

TestNetworkPlugins/group/calico/HairPin (0.49s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-375051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.49s)

TestNetworkPlugins/group/enable-default-cni/Start (73.99s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-375051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0819 20:02:02.882887  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-375051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m13.986187922s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (73.99s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-375051 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-375051 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-blr6z" [ae208c9c-016a-421c-aaa0-9858eb7db399] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-blr6z" [ae208c9c-016a-421c-aaa0-9858eb7db399] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003331422s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.42s)

TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-375051 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-375051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-375051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

TestNetworkPlugins/group/flannel/Start (51.59s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-375051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-375051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (51.590377187s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.59s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.66s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-375051 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.66s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-375051 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6cxp9" [747b5f16-3ec3-435e-8f69-7a7c286861f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6cxp9" [747b5f16-3ec3-435e-8f69-7a7c286861f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004511768s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-375051 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.32s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-375051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-375051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

TestNetworkPlugins/group/bridge/Start (77.51s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-375051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-375051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m17.513932537s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.51s)

TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-h4bq4" [12c07074-6d44-419e-9c5d-fb7f32fc2c0a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.013630042s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-375051 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/flannel/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-375051 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-crw2v" [0cd12cb5-9dcb-4a9f-9309-65c49188eb4d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-crw2v" [0cd12cb5-9dcb-4a9f-9309-65c49188eb4d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.006133107s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.32s)

TestNetworkPlugins/group/flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-375051 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-375051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-375051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (135.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-161772 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0819 20:04:21.309535  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/auto-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:04:26.431469  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/auto-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:04:36.672980  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/auto-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:04:38.039944  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-161772 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m15.085688553s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (135.09s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-375051 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

TestNetworkPlugins/group/bridge/NetCatPod (13.5s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-375051 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sml7r" [8384b248-4fd5-4aeb-9fdd-3b2ba7708445] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 20:04:57.155016  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/auto-375051/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-sml7r" [8384b248-4fd5-4aeb-9fdd-3b2ba7708445] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.004440121s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.50s)

TestNetworkPlugins/group/bridge/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-375051 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-375051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-375051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)
E0819 20:18:59.815595  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:19:16.174275  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/auto-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:19:18.930372  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/no-preload/serial/FirstStart (72.04s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-603631 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 20:05:32.614375  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:05:32.620843  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:05:32.632340  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:05:32.653846  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:05:32.695269  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:05:32.776773  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:05:32.938369  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:05:33.260525  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:05:33.902459  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:05:35.184172  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:05:37.746340  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:05:38.116533  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/auto-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:05:42.868581  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:05:53.110761  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:06:01.589449  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:06:01.596731  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:06:01.608046  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:06:01.629386  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:06:01.671099  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:06:01.753155  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:06:01.914574  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:06:02.235974  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:06:02.877674  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:06:04.159000  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:06:06.720849  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:06:11.842389  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:06:13.592711  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:06:22.084828  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-603631 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m12.040324122s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.04s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-161772 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b8ad8b39-bf29-4449-abbd-e84516266c1a] Pending
helpers_test.go:344: "busybox" [b8ad8b39-bf29-4449-abbd-e84516266c1a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b8ad8b39-bf29-4449-abbd-e84516266c1a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004135675s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-161772 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.57s)
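DeployApp ends with a ulimit probe inside the busybox pod, confirming the container runtime applied the expected open-file limit. A minimal sketch of the same check by hand (kubectl wait stands in for the test's poll loop; the 8m timeout mirrors the test's wait):

	kubectl --context old-k8s-version-161772 wait --for=condition=Ready pod/busybox --timeout=8m
	kubectl --context old-k8s-version-161772 exec busybox -- /bin/sh -c "ulimit -n"    # prints the open file descriptor limit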

TestStartStop/group/no-preload/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-603631 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8333038d-5ecc-490c-903c-b74a36941b8a] Pending
helpers_test.go:344: "busybox" [8333038d-5ecc-490c-903c-b74a36941b8a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0819 20:06:42.566773  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [8333038d-5ecc-490c-903c-b74a36941b8a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00453193s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-603631 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-161772 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-161772 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.177878573s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-161772 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.33s)

TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-161772 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-161772 --alsologtostderr -v=3: (12.106273056s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-603631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-603631 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/no-preload/serial/Stop (12.09s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-603631 --alsologtostderr -v=3
E0819 20:06:54.554791  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-603631 --alsologtostderr -v=3: (12.085337396s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-161772 -n old-k8s-version-161772
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-161772 -n old-k8s-version-161772: exit status 7 (70.574664ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-161772 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
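Note the pattern here: minikube status exits non-zero while the host is stopped (exit status 7 above), and the test tolerates that before enabling the addon. A minimal sketch of the same sequence (profile name from the log):

	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-161772 || echo "exit $? while stopped (expected)"
	out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-161772 --images=MetricsScraper=registry.k8s.io/echoserver:1.4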

TestStartStop/group/old-k8s-version/serial/SecondStart (308.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-161772 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0819 20:07:00.038570  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/auto-375051/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-161772 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (5m8.564024059s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-161772 -n old-k8s-version-161772
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (308.95s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-603631 -n no-preload-603631
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-603631 -n no-preload-603631: exit status 7 (79.52079ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-603631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (302.45s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-603631 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 20:07:13.665731  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:07:13.672439  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:07:13.683757  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:07:13.705131  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:07:13.746697  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:07:13.828318  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:07:13.989913  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:07:14.312026  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:07:14.953346  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:07:16.234969  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:07:18.796262  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:07:23.528901  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:07:23.917577  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:07:34.159426  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:07:54.641389  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:01.046294  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:01.052802  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:01.064257  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:01.086259  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:01.127689  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:01.209248  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:01.370933  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:01.693055  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:02.334356  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:03.616649  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:06.178518  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:11.300261  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:16.477099  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:21.542493  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:35.603141  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:39.010502  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:39.017117  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:39.028592  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:39.050301  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:39.091845  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:39.173309  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:39.334791  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:39.656415  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:40.298685  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:41.580529  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:42.024256  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:44.141893  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:45.450989  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:49.263931  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:59.505383  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:08:59.815142  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:16.173531  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/auto-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:19.987034  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:22.985940  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:38.039233  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:43.880749  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/auto-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:53.114394  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:53.120857  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:53.132370  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:53.153890  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:53.195352  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:53.276895  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:53.438398  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:53.760254  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:54.402565  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:55.684139  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:57.525267  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:09:58.246315  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:10:00.948607  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:10:03.368160  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:10:13.610087  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:10:32.615308  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:10:34.092201  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:10:44.908224  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:11:00.319323  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:11:01.589127  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:11:15.057454  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:11:22.870869  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:11:29.293166  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-603631 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (5m2.038510694s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-603631 -n no-preload-603631
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (302.45s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-76qv8" [34afb18a-0666-417f-9c54-7e5fedb0ae8b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004155468s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zhggw" [0f5cba35-d87c-4713-8e58-6161e39e2d2b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005150432s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-76qv8" [34afb18a-0666-417f-9c54-7e5fedb0ae8b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00438982s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-603631 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zhggw" [0f5cba35-d87c-4713-8e58-6161e39e2d2b] Running
E0819 20:12:13.666043  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014813425s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-161772 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.22s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-603631 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
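VerifyKubernetesImages lists the images cached in the profile and reports anything outside the expected Kubernetes image set (the two non-minikube entries above). A sketch of the same listing; --format=json is the form the test uses, and piping the output through a JSON tool for filtering is left as an option:

	out/minikube-linux-arm64 -p no-preload-603631 image list --format=json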

TestStartStop/group/no-preload/serial/Pause (3.95s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-603631 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-603631 -n no-preload-603631
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-603631 -n no-preload-603631: exit status 2 (449.249302ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-603631 -n no-preload-603631
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-603631 -n no-preload-603631: exit status 2 (429.972288ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-603631 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-603631 -n no-preload-603631
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-603631 -n no-preload-603631
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.95s)
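The Pause step checks both sides of the toggle: while paused, status reports APIServer as Paused and Kubelet as Stopped, each with exit status 2, and unpause restores both. A condensed sketch of the same round trip (combining both fields in one Go template is an assumption; the test queries them separately):

	out/minikube-linux-arm64 pause -p no-preload-603631 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format='{{.APIServer}}/{{.Kubelet}}' -p no-preload-603631    # expect Paused/Stopped while paused (exit status 2)
	out/minikube-linux-arm64 unpause -p no-preload-603631 --alsologtostderr -v=1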

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-161772 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/old-k8s-version/serial/Pause (4.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-161772 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-161772 --alsologtostderr -v=1: (1.118903044s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-161772 -n old-k8s-version-161772
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-161772 -n old-k8s-version-161772: exit status 2 (633.703852ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-161772 -n old-k8s-version-161772
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-161772 -n old-k8s-version-161772: exit status 2 (451.450557ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-161772 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-161772 --alsologtostderr -v=1: (1.311274897s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-161772 -n old-k8s-version-161772
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-161772 -n old-k8s-version-161772
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.44s)

TestStartStop/group/embed-certs/serial/FirstStart (70.16s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-588863 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-588863 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m10.161950718s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.16s)
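--embed-certs tells minikube to write the client certificate and key into kubeconfig as inline base64 data instead of file-path references. One way to spot-check that, assuming the standard kubeconfig field names:

$ kubectl config view --raw --minify --context embed-certs-588863 | grep client-certificate-data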

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.53s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-692104 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 20:12:36.979371  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:12:41.367015  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:13:01.045714  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:13:28.750698  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-692104 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m14.534773616s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.53s)
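--apiserver-port=8444 moves the API server off the default 8443; the rest of this group then exercises the cluster through that non-default port. A quick confirmation with plain kubectl:

$ kubectl --context default-k8s-diff-port-692104 cluster-info   # control-plane URL should end in :8444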

TestStartStop/group/embed-certs/serial/DeployApp (8.33s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-588863 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d45ef320-05e7-42dc-90a9-2335aa3d5768] Pending
helpers_test.go:344: "busybox" [d45ef320-05e7-42dc-90a9-2335aa3d5768] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d45ef320-05e7-42dc-90a9-2335aa3d5768] Running
E0819 20:13:39.011305  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004090245s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-588863 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.33s)
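DeployApp creates a busybox pod from testdata/busybox.yaml, waits for it to report Ready, then execs into it to prove kubectl exec works end to end. The real manifest lives in the minikube repo; the sketch below is a stand-in with the same label and the busybox image this run already has loaded:

$ kubectl --context embed-certs-588863 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sh", "-c", "sleep 3600"]
EOF
$ kubectl --context embed-certs-588863 wait --for=condition=Ready pod/busybox --timeout=8m
$ kubectl --context embed-certs-588863 exec busybox -- /bin/sh -c "ulimit -n"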

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-692104 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e4cc5bb9-65b5-485f-8c99-cab6765f601d] Pending
helpers_test.go:344: "busybox" [e4cc5bb9-65b5-485f-8c99-cab6765f601d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e4cc5bb9-65b5-485f-8c99-cab6765f601d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004025835s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-692104 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-588863 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-588863 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.200351006s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-588863 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)
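The --images and --registries flags override an addon's default image and registry per component; pointing MetricsServer at fake.domain lets the test assert the override landed in the Deployment without ever pulling an image. A sketch of that assertion:

$ kubectl --context embed-certs-588863 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'   # expect an image prefixed with fake.domain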

TestStartStop/group/embed-certs/serial/Stop (12.19s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-588863 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-588863 --alsologtostderr -v=3: (12.186584506s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.19s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-692104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-692104 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-692104 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-692104 --alsologtostderr -v=3: (12.090008261s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-588863 -n embed-certs-588863
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-588863 -n embed-certs-588863: exit status 7 (64.667419ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-588863 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
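minikube status encodes cluster state in its exit code: this run pairs exit status 7 with a Stopped host and exit status 2 with a paused one, and the tests accept both as expected non-zero results. A one-line probe:

$ out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-588863 ; echo "exit=$?"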

TestStartStop/group/embed-certs/serial/SecondStart (268.53s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-588863 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 20:13:59.815251  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-588863 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m28.099002255s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-588863 -n embed-certs-588863
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.53s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.3s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-692104 -n default-k8s-diff-port-692104
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-692104 -n default-k8s-diff-port-692104: exit status 7 (124.010722ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-692104 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.30s)
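Enabling the dashboard addon against a stopped profile works because addons enable only records the addon in the profile's stored config; the pods are created on the next start, which the UserAppExistsAfterStop check below then verifies. A minimal sketch:

$ out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-692104
$ out/minikube-linux-arm64 start -p default-k8s-diff-port-692104
$ kubectl --context default-k8s-diff-port-692104 -n kubernetes-dashboard get pods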

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (272.58s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-692104 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 20:14:06.712279  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:14:16.173817  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/auto-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:14:21.107338  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:14:38.039643  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:14:53.114724  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:15:20.821484  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/bridge-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:15:32.614565  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/kindnet-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:01.589309  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/calico-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:35.069266  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:35.075656  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:35.087159  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:35.108580  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:35.150048  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:35.231513  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:35.393016  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:35.714690  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:36.356633  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:37.638921  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:40.201024  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:41.444907  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:41.452350  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:41.463839  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:41.485263  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:41.526888  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:41.608458  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:41.769839  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:42.091728  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:42.733790  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:44.015752  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:45.322512  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:46.578218  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:51.700119  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:16:55.564578  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:17:01.942111  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:17:13.665518  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/custom-flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:17:16.046854  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:17:22.423690  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:17:57.008520  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/old-k8s-version-161772/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:18:01.045668  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/enable-default-cni-375051/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:18:03.385203  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-692104 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m32.136448852s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-692104 -n default-k8s-diff-port-692104
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (272.58s)
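The E0819 cert_rotation lines interleaved above are client-side noise, not failures: client-go's certificate-rotation watcher in the test process still references client certificates of profiles (bridge-375051, calico-375051, and friends) whose files disappeared when those earlier network-plugin profiles were deleted. When reviewing a run like this, the noise can be filtered out first, e.g.:

$ grep -v cert_rotation.go run.log   # run.log is a stand-in for wherever this output was captured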

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lkfjv" [d9bb8c11-0800-469b-b170-48a7fcc05531] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003642318s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lkfjv" [d9bb8c11-0800-469b-b170-48a7fcc05531] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004687831s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-588863 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-588863 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/embed-certs/serial/Pause (3.28s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-588863 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-588863 -n embed-certs-588863
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-588863 -n embed-certs-588863: exit status 2 (347.155699ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-588863 -n embed-certs-588863
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-588863 -n embed-certs-588863: exit status 2 (322.028903ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-588863 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-588863 -n embed-certs-588863
E0819 20:18:39.011128  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/flannel-375051/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-588863 -n embed-certs-588863
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.28s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vrwl7" [0174f4fe-88f2-4400-a41c-175c4eadc766] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004029908s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vrwl7" [0174f4fe-88f2-4400-a41c-175c4eadc766] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.029922368s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-692104 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/newest-cni/serial/FirstStart (41.22s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-888483 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 20:18:42.884172  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/functional-559559/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-888483 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (41.224662454s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.22s)
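This start exercises a bare CNI configuration: --network-plugin=cni plus --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 hands the pod CIDR to kubeadm, and --wait=apiserver,system_pods,default_sa narrows the readiness gate because ordinary pods cannot schedule until a CNI plugin is installed (hence the warnings in the checks below). A sketch of confirming the CIDR reached the node, assuming the controller-manager's default /24 per-node slice:

$ kubectl --context newest-cni-888483 get nodes -o jsonpath='{.items[0].spec.podCIDR}'   # expect 10.42.0.0/24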

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-692104 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-692104 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-692104 --alsologtostderr -v=1: (1.351403918s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-692104 -n default-k8s-diff-port-692104
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-692104 -n default-k8s-diff-port-692104: exit status 2 (585.557026ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-692104 -n default-k8s-diff-port-692104
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-692104 -n default-k8s-diff-port-692104: exit status 2 (403.104837ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-692104 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-692104 -n default-k8s-diff-port-692104
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-692104 -n default-k8s-diff-port-692104
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.00s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-888483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-888483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.16826836s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/newest-cni/serial/Stop (1.24s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-888483 --alsologtostderr -v=3
E0819 20:19:25.306827  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/no-preload-603631/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-888483 --alsologtostderr -v=3: (1.237581972s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-888483 -n newest-cni-888483
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-888483 -n newest-cni-888483: exit status 7 (80.246372ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-888483 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (15.66s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-888483 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 20:19:38.039932  719052 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/addons-764717/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-888483 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (15.282954009s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-888483 -n newest-cni-888483
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.66s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-888483 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/newest-cni/serial/Pause (3.17s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-888483 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-888483 -n newest-cni-888483
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-888483 -n newest-cni-888483: exit status 2 (302.826664ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-888483 -n newest-cni-888483
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-888483 -n newest-cni-888483: exit status 2 (312.309228ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-888483 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-888483 -n newest-cni-888483
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-888483 -n newest-cni-888483
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.17s)

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-715450 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-715450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-715450
--- SKIP: TestDownloadOnlyKic (0.56s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (5.65s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-375051 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-375051

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-375051

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-375051

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-375051

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-375051

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-375051

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-375051

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-375051

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-375051

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-375051

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-375051

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-375051" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-375051" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-375051" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-375051" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-375051" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-375051" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-375051" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-375051" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-375051" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-375051" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-375051" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-375051

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-375051"

                                                
                                                
----------------------- debugLogs end: kubenet-375051 [took: 5.393168246s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-375051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-375051
--- SKIP: TestNetworkPlugins/group/kubenet (5.65s)
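
Even though the test skips, the harness still collects the full battery of diagnostics above, running each probe against a profile that may never have been created and printing the failure instead of aborting. A sketch of that tolerant probe loop (this is not the harness's actual helper; names are illustrative):

package integration

import (
	"fmt"
	"os/exec"
)

// debugProbe runs one diagnostic command and prints whatever comes back,
// errors included, mirroring the ">>> label:" entries above: a missing
// profile yields an error line rather than stopping the collection.
func debugProbe(label string, name string, args ...string) {
	fmt.Printf(">>> %s:\n", label)
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil && len(out) == 0 {
		fmt.Println(err)
	}
	fmt.Println()
}

Called as, for example, debugProbe("k8s: cms", "kubectl", "--context", "kubenet-375051", "get", "cm", "-A"), it keeps going no matter how the command fails, which is exactly the behavior the block above shows.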

                                                
                                    
TestNetworkPlugins/group/cilium (4.99s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-375051 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-375051

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-375051

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-375051

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-375051

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-375051

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-375051

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-375051

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-375051

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-375051

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-375051

>>> host: /etc/nsswitch.conf:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: /etc/hosts:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: /etc/resolv.conf:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-375051

>>> host: crictl pods:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: crictl containers:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> k8s: describe netcat deployment:
error: context "cilium-375051" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-375051" does not exist

>>> k8s: netcat logs:
error: context "cilium-375051" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-375051" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-375051" does not exist

>>> k8s: coredns logs:
error: context "cilium-375051" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-375051" does not exist

>>> k8s: api server logs:
error: context "cilium-375051" does not exist

>>> host: /etc/cni:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: ip a s:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: ip r s:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: iptables-save:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: iptables table nat:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-375051

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-375051

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-375051" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-375051" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-375051

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-375051

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-375051" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-375051" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-375051" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-375051" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-375051" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: kubelet daemon config:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> k8s: kubelet logs:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19468-713648/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 19:50:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-465415
contexts:
- context:
    cluster: missing-upgrade-465415
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 19:50:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-465415
  name: missing-upgrade-465415
current-context: missing-upgrade-465415
kind: Config
preferences: {}
users:
- name: missing-upgrade-465415
  user:
    client-certificate: /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/missing-upgrade-465415/client.crt
    client-key: /home/jenkins/minikube-integration/19468-713648/.minikube/profiles/missing-upgrade-465415/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-375051

>>> host: docker daemon status:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: docker daemon config:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: docker system info:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: cri-docker daemon status:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: cri-docker daemon config:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: cri-dockerd version:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: containerd daemon status:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: containerd daemon config:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: containerd config dump:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: crio daemon status:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: crio daemon config:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: /etc/crio:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

>>> host: crio config:
* Profile "cilium-375051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-375051"

----------------------- debugLogs end: cilium-375051 [took: 4.841902196s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-375051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-375051
--- SKIP: TestNetworkPlugins/group/cilium (4.99s)
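
One detail worth noticing in the cilium diagnostics: the "k8s: kubectl config" probe shows a kubeconfig whose only entry is a leftover missing-upgrade-465415 profile, which is consistent with every kubectl probe for cilium-375051 reporting a missing context. A quick way to inspect the same data programmatically, sketched with client-go (an external dependency, not part of the harness):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// Load the default kubeconfig (KUBECONFIG or ~/.kube/config) and list its
// contexts; a deleted minikube profile simply never appears here, so probes
// against it fail with "context was not found".
func main() {
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err)
	}
	for name := range cfg.Contexts {
		marker := " "
		if name == cfg.CurrentContext {
			marker = "*"
		}
		fmt.Printf("%s %s\n", marker, name)
	}
}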

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-486502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-486502
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
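
Note that the profile is cleaned up even though the group never ran, as the "Cleaning up ... profile" lines above show. A sketch of that skip-then-cleanup ordering (driver and profile values are illustrative, not read from real flags):

package integration

import (
	"os/exec"
	"testing"
)

func TestDisableDriverMountsSketch(t *testing.T) {
	profile := "disable-driver-mounts-486502"
	// t.Cleanup still runs when t.Skip fires, so the profile is deleted
	// either way, matching the helper output above.
	t.Cleanup(func() {
		exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).Run()
	})
	driver := "docker" // would come from the suite's -driver flag
	if driver != "virtualbox" {
		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
	}
}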

                                                
                                    