Test Report: Docker_Linux_containerd_arm64 19584

9f2af3711cc698027f451721692d4ad7c6bf425f:2024-09-09:36138

Failed tests (2/328)

| Order | Failed test                     | Duration (s) |
|-------|---------------------------------|--------------|
| 29    | TestAddons/serial/Volcano       | 200.26       |
| 111   | TestFunctional/parallel/License | 0.23         |
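To chase the Volcano failure outside CI, the integration suite can be filtered with go test's standard -run regex. A sketch, assuming the minikube source tree at the commit above and a prebuilt out/minikube-linux-arm64 (the tests shell out to that binary, as the Run lines below show):

    # from the minikube repo root; -v, -run and -timeout are standard go test flags
    go test ./test/integration -v -run 'TestAddons/serial/Volcano' -timeout 60m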
TestAddons/serial/Volcano (200.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 54.14946ms
addons_test.go:897: volcano-scheduler stabilized in 55.36699ms
addons_test.go:905: volcano-admission stabilized in 55.782784ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-tn2vq" [3bd2a4dc-d1e6-45c3-b261-e9bc89f21c0d] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003709391s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-zbqv2" [7cc58b74-b0d7-4b94-b2c3-4638247a41c9] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004476888s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-zg8pg" [b55bd087-c4e6-428d-905e-b727d242fbf2] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004254037s
addons_test.go:932: (dbg) Run:  kubectl --context addons-630724 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-630724 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-630724 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [500faad8-af2e-44b1-96d1-7e476b971174] Pending
helpers_test.go:344: "test-job-nginx-0" [500faad8-af2e-44b1-96d1-7e476b971174] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-630724 -n addons-630724
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-09 11:49:00.687284917 +0000 UTC m=+430.896797380
addons_test.go:964: (dbg) Run:  kubectl --context addons-630724 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-630724 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-78115208-2aa9-47be-abfe-2b71bb87f399
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tjfmt (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-tjfmt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-630724 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-630724 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
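The root cause is visible in the pod description above: the job's single task requests a full CPU (requests/limits cpu: 1) on a one-node cluster whose node is a Docker container capped at 2 CPUs (see HostConfig.NanoCpus in the docker inspect output below), most of which the addon pods already reserve. A minimal diagnostic sketch, assuming the context and node name addons-630724 from this run:

    # How much CPU is allocatable on the node, and how much is already requested?
    kubectl --context addons-630724 describe node addons-630724 | grep -A 8 'Allocated resources'

    # Per-pod CPU requests across all namespaces, via standard jsonpath fields.
    kubectl --context addons-630724 get pods -A \
      -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].resources.requests.cpu}{"\n"}{end}'

If the summed requests plus the job's 1 CPU exceed the node's allocatable CPU, the "0/1 nodes are unavailable: 1 Insufficient cpu." event from the volcano scheduler is the expected outcome.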
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-630724
helpers_test.go:235: (dbg) docker inspect addons-630724:

-- stdout --
	[
	    {
	        "Id": "ffb39066bf457a915c79fe74d19fc4f6c9eadb082c1a7b3da50c45a71441d501",
	        "Created": "2024-09-09T11:42:30.901786712Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-09T11:42:31.074473704Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8411aacd61cb8f2a7ae48c92e2c9e76ad4dd701b3dba8b30393c5cc31fbd2b15",
	        "ResolvConfPath": "/var/lib/docker/containers/ffb39066bf457a915c79fe74d19fc4f6c9eadb082c1a7b3da50c45a71441d501/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ffb39066bf457a915c79fe74d19fc4f6c9eadb082c1a7b3da50c45a71441d501/hostname",
	        "HostsPath": "/var/lib/docker/containers/ffb39066bf457a915c79fe74d19fc4f6c9eadb082c1a7b3da50c45a71441d501/hosts",
	        "LogPath": "/var/lib/docker/containers/ffb39066bf457a915c79fe74d19fc4f6c9eadb082c1a7b3da50c45a71441d501/ffb39066bf457a915c79fe74d19fc4f6c9eadb082c1a7b3da50c45a71441d501-json.log",
	        "Name": "/addons-630724",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-630724:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-630724",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8c275e2cd3941a29da9f4c3c61bc106fde134dc4d5f48b00458a95a4c6de165a-init/diff:/var/lib/docker/overlay2/814fb589cbf56b8fd633abdb6968243e97b9bab300625366d76f729b25701844/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8c275e2cd3941a29da9f4c3c61bc106fde134dc4d5f48b00458a95a4c6de165a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8c275e2cd3941a29da9f4c3c61bc106fde134dc4d5f48b00458a95a4c6de165a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8c275e2cd3941a29da9f4c3c61bc106fde134dc4d5f48b00458a95a4c6de165a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-630724",
	                "Source": "/var/lib/docker/volumes/addons-630724/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-630724",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-630724",
	                "name.minikube.sigs.k8s.io": "addons-630724",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5afbe3ec08842490584039ca414019fc08160a9c478117a6ec489a629073df9c",
	            "SandboxKey": "/var/run/docker/netns/5afbe3ec0884",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-630724": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0b27d3b67e297d20eeee2b9a9a5586c7d860c25c1b815d5e6d66b8d1b1e155ba",
	                    "EndpointID": "7e274e76f03765cf8635cc90e2c2a03ffec56f9ca339b66fbeefe6ce4fd9d975",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-630724",
	                        "ffb39066bf45"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
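The inspect output above confirms the resource ceiling: HostConfig.NanoCpus is 2000000000 (2 CPUs) and HostConfig.Memory is 4194304000 bytes (exactly 4000 MiB, matching the --memory=4000 start flag recorded in the Audit log below), so every pod in the cluster shares two cores. A one-liner to extract just those two fields, using the container name from this run:

    docker inspect addons-630724 --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}'
    # here: 2000000000 4194304000   (nano-CPUs, bytes)

Recreating the profile with more cores (e.g. minikube start --cpus=4; --cpus is a standard minikube flag) would be the natural fix if the job truly needs a dedicated CPU, though that is an inference from the scheduler event rather than something this report verifies.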
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-630724 -n addons-630724
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-630724 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-630724 logs -n 25: (1.619353142s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-413866   | jenkins | v1.34.0 | 09 Sep 24 11:41 UTC |                     |
	|         | -p download-only-413866              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 09 Sep 24 11:41 UTC | 09 Sep 24 11:41 UTC |
	| delete  | -p download-only-413866              | download-only-413866   | jenkins | v1.34.0 | 09 Sep 24 11:41 UTC | 09 Sep 24 11:41 UTC |
	| start   | -o=json --download-only              | download-only-847638   | jenkins | v1.34.0 | 09 Sep 24 11:41 UTC |                     |
	|         | -p download-only-847638              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 09 Sep 24 11:42 UTC | 09 Sep 24 11:42 UTC |
	| delete  | -p download-only-847638              | download-only-847638   | jenkins | v1.34.0 | 09 Sep 24 11:42 UTC | 09 Sep 24 11:42 UTC |
	| delete  | -p download-only-413866              | download-only-413866   | jenkins | v1.34.0 | 09 Sep 24 11:42 UTC | 09 Sep 24 11:42 UTC |
	| delete  | -p download-only-847638              | download-only-847638   | jenkins | v1.34.0 | 09 Sep 24 11:42 UTC | 09 Sep 24 11:42 UTC |
	| start   | --download-only -p                   | download-docker-455370 | jenkins | v1.34.0 | 09 Sep 24 11:42 UTC |                     |
	|         | download-docker-455370               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-455370            | download-docker-455370 | jenkins | v1.34.0 | 09 Sep 24 11:42 UTC | 09 Sep 24 11:42 UTC |
	| start   | --download-only -p                   | binary-mirror-053699   | jenkins | v1.34.0 | 09 Sep 24 11:42 UTC |                     |
	|         | binary-mirror-053699                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41591               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-053699              | binary-mirror-053699   | jenkins | v1.34.0 | 09 Sep 24 11:42 UTC | 09 Sep 24 11:42 UTC |
	| addons  | enable dashboard -p                  | addons-630724          | jenkins | v1.34.0 | 09 Sep 24 11:42 UTC |                     |
	|         | addons-630724                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-630724          | jenkins | v1.34.0 | 09 Sep 24 11:42 UTC |                     |
	|         | addons-630724                        |                        |         |         |                     |                     |
	| start   | -p addons-630724 --wait=true         | addons-630724          | jenkins | v1.34.0 | 09 Sep 24 11:42 UTC | 09 Sep 24 11:45 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/09 11:42:05
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0909 11:42:05.766034  299508 out.go:345] Setting OutFile to fd 1 ...
	I0909 11:42:05.766210  299508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:42:05.766238  299508 out.go:358] Setting ErrFile to fd 2...
	I0909 11:42:05.766257  299508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:42:05.766515  299508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
	I0909 11:42:05.767015  299508 out.go:352] Setting JSON to false
	I0909 11:42:05.767895  299508 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5064,"bootTime":1725877062,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0909 11:42:05.767991  299508 start.go:139] virtualization:  
	I0909 11:42:05.771252  299508 out.go:177] * [addons-630724] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0909 11:42:05.774244  299508 out.go:177]   - MINIKUBE_LOCATION=19584
	I0909 11:42:05.774385  299508 notify.go:220] Checking for updates...
	I0909 11:42:05.779500  299508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0909 11:42:05.782675  299508 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19584-293351/kubeconfig
	I0909 11:42:05.786047  299508 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-293351/.minikube
	I0909 11:42:05.789322  299508 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0909 11:42:05.792081  299508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0909 11:42:05.794866  299508 driver.go:394] Setting default libvirt URI to qemu:///system
	I0909 11:42:05.818773  299508 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0909 11:42:05.818951  299508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 11:42:05.871606  299508 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-09 11:42:05.862180964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0909 11:42:05.871718  299508 docker.go:307] overlay module found
	I0909 11:42:05.874914  299508 out.go:177] * Using the docker driver based on user configuration
	I0909 11:42:05.877686  299508 start.go:297] selected driver: docker
	I0909 11:42:05.877704  299508 start.go:901] validating driver "docker" against <nil>
	I0909 11:42:05.877718  299508 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0909 11:42:05.878355  299508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 11:42:05.935780  299508 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-09 11:42:05.926673029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0909 11:42:05.935954  299508 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0909 11:42:05.936197  299508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0909 11:42:05.939106  299508 out.go:177] * Using Docker driver with root privileges
	I0909 11:42:05.941451  299508 cni.go:84] Creating CNI manager for ""
	I0909 11:42:05.941484  299508 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0909 11:42:05.941498  299508 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0909 11:42:05.941587  299508 start.go:340] cluster config:
	{Name:addons-630724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-630724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: N
etworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 G
PUs: AutoPauseInterval:1m0s}
	I0909 11:42:05.946607  299508 out.go:177] * Starting "addons-630724" primary control-plane node in "addons-630724" cluster
	I0909 11:42:05.948954  299508 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0909 11:42:05.951482  299508 out.go:177] * Pulling base image v0.0.45 ...
	I0909 11:42:05.953984  299508 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0909 11:42:05.954041  299508 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19584-293351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0909 11:42:05.954054  299508 cache.go:56] Caching tarball of preloaded images
	I0909 11:42:05.954087  299508 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0909 11:42:05.954148  299508 preload.go:172] Found /home/jenkins/minikube-integration/19584-293351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0909 11:42:05.954159  299508 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0909 11:42:05.954534  299508 profile.go:143] Saving config to /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/config.json ...
	I0909 11:42:05.954559  299508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/config.json: {Name:mkc8150b164886927a10ebe7d052ba51fed06395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 11:42:05.969430  299508 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0909 11:42:05.969563  299508 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0909 11:42:05.969588  299508 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0909 11:42:05.969594  299508 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0909 11:42:05.969602  299508 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0909 11:42:05.969615  299508 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from local cache
	I0909 11:42:23.105251  299508 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from cached tarball
	I0909 11:42:23.105296  299508 cache.go:194] Successfully downloaded all kic artifacts
	I0909 11:42:23.105361  299508 start.go:360] acquireMachinesLock for addons-630724: {Name:mk78d6c6d8207ca26f71a24085d773e3b39cb19f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0909 11:42:23.106044  299508 start.go:364] duration metric: took 652.845µs to acquireMachinesLock for "addons-630724"
	I0909 11:42:23.106104  299508 start.go:93] Provisioning new machine with config: &{Name:addons-630724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-630724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0909 11:42:23.106190  299508 start.go:125] createHost starting for "" (driver="docker")
	I0909 11:42:23.109431  299508 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0909 11:42:23.109706  299508 start.go:159] libmachine.API.Create for "addons-630724" (driver="docker")
	I0909 11:42:23.109744  299508 client.go:168] LocalClient.Create starting
	I0909 11:42:23.109855  299508 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19584-293351/.minikube/certs/ca.pem
	I0909 11:42:23.749363  299508 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19584-293351/.minikube/certs/cert.pem
	I0909 11:42:24.337395  299508 cli_runner.go:164] Run: docker network inspect addons-630724 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0909 11:42:24.351591  299508 cli_runner.go:211] docker network inspect addons-630724 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0909 11:42:24.351689  299508 network_create.go:284] running [docker network inspect addons-630724] to gather additional debugging logs...
	I0909 11:42:24.351714  299508 cli_runner.go:164] Run: docker network inspect addons-630724
	W0909 11:42:24.367303  299508 cli_runner.go:211] docker network inspect addons-630724 returned with exit code 1
	I0909 11:42:24.367345  299508 network_create.go:287] error running [docker network inspect addons-630724]: docker network inspect addons-630724: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-630724 not found
	I0909 11:42:24.367361  299508 network_create.go:289] output of [docker network inspect addons-630724]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-630724 not found
	
	** /stderr **
	I0909 11:42:24.367536  299508 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0909 11:42:24.381403  299508 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017b0870}
	I0909 11:42:24.381450  299508 network_create.go:124] attempt to create docker network addons-630724 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0909 11:42:24.381517  299508 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-630724 addons-630724
	I0909 11:42:24.454255  299508 network_create.go:108] docker network addons-630724 192.168.49.0/24 created
	I0909 11:42:24.454290  299508 kic.go:121] calculated static IP "192.168.49.2" for the "addons-630724" container
	I0909 11:42:24.454368  299508 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0909 11:42:24.468677  299508 cli_runner.go:164] Run: docker volume create addons-630724 --label name.minikube.sigs.k8s.io=addons-630724 --label created_by.minikube.sigs.k8s.io=true
	I0909 11:42:24.487305  299508 oci.go:103] Successfully created a docker volume addons-630724
	I0909 11:42:24.487397  299508 cli_runner.go:164] Run: docker run --rm --name addons-630724-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630724 --entrypoint /usr/bin/test -v addons-630724:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib
	I0909 11:42:26.610009  299508 cli_runner.go:217] Completed: docker run --rm --name addons-630724-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630724 --entrypoint /usr/bin/test -v addons-630724:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib: (2.122564764s)
	I0909 11:42:26.610039  299508 oci.go:107] Successfully prepared a docker volume addons-630724
	I0909 11:42:26.610069  299508 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0909 11:42:26.610094  299508 kic.go:194] Starting extracting preloaded images to volume ...
	I0909 11:42:26.610179  299508 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19584-293351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-630724:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir
	I0909 11:42:30.825212  299508 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19584-293351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-630724:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir: (4.214988058s)
	I0909 11:42:30.825251  299508 kic.go:203] duration metric: took 4.215153316s to extract preloaded images to volume ...
	W0909 11:42:30.825416  299508 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0909 11:42:30.825529  299508 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0909 11:42:30.884513  299508 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-630724 --name addons-630724 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630724 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-630724 --network addons-630724 --ip 192.168.49.2 --volume addons-630724:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85
	I0909 11:42:31.261118  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Running}}
	I0909 11:42:31.288605  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:42:31.317193  299508 cli_runner.go:164] Run: docker exec addons-630724 stat /var/lib/dpkg/alternatives/iptables
	I0909 11:42:31.395671  299508 oci.go:144] the created container "addons-630724" has a running status.
	I0909 11:42:31.395701  299508 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa...
	I0909 11:42:32.740980  299508 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0909 11:42:32.761249  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:42:32.777952  299508 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0909 11:42:32.777976  299508 kic_runner.go:114] Args: [docker exec --privileged addons-630724 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0909 11:42:32.836150  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:42:32.852718  299508 machine.go:93] provisionDockerMachine start ...
	I0909 11:42:32.852834  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:42:32.868767  299508 main.go:141] libmachine: Using SSH client type: native
	I0909 11:42:32.869046  299508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0909 11:42:32.869061  299508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0909 11:42:32.989033  299508 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-630724
	
	I0909 11:42:32.989058  299508 ubuntu.go:169] provisioning hostname "addons-630724"
	I0909 11:42:32.989138  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:42:33.006001  299508 main.go:141] libmachine: Using SSH client type: native
	I0909 11:42:33.006264  299508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0909 11:42:33.006282  299508 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-630724 && echo "addons-630724" | sudo tee /etc/hostname
	I0909 11:42:33.162553  299508 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-630724
	
	I0909 11:42:33.162664  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:42:33.180036  299508 main.go:141] libmachine: Using SSH client type: native
	I0909 11:42:33.180311  299508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0909 11:42:33.180335  299508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-630724' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-630724/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-630724' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0909 11:42:33.301667  299508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0909 11:42:33.301699  299508 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19584-293351/.minikube CaCertPath:/home/jenkins/minikube-integration/19584-293351/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19584-293351/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19584-293351/.minikube}
	I0909 11:42:33.301775  299508 ubuntu.go:177] setting up certificates
	I0909 11:42:33.301785  299508 provision.go:84] configureAuth start
	I0909 11:42:33.301870  299508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630724
	I0909 11:42:33.319873  299508 provision.go:143] copyHostCerts
	I0909 11:42:33.319998  299508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19584-293351/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19584-293351/.minikube/ca.pem (1078 bytes)
	I0909 11:42:33.320161  299508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19584-293351/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19584-293351/.minikube/cert.pem (1123 bytes)
	I0909 11:42:33.320336  299508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19584-293351/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19584-293351/.minikube/key.pem (1679 bytes)
	I0909 11:42:33.320413  299508 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19584-293351/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19584-293351/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19584-293351/.minikube/certs/ca-key.pem org=jenkins.addons-630724 san=[127.0.0.1 192.168.49.2 addons-630724 localhost minikube]
	I0909 11:42:33.735772  299508 provision.go:177] copyRemoteCerts
	I0909 11:42:33.735853  299508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0909 11:42:33.735900  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:42:33.754458  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:42:33.843635  299508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-293351/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0909 11:42:33.869320  299508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-293351/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0909 11:42:33.895157  299508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-293351/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0909 11:42:33.921236  299508 provision.go:87] duration metric: took 619.429638ms to configureAuth
	I0909 11:42:33.921263  299508 ubuntu.go:193] setting minikube options for container-runtime
	I0909 11:42:33.921467  299508 config.go:182] Loaded profile config "addons-630724": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0909 11:42:33.921480  299508 machine.go:96] duration metric: took 1.068737418s to provisionDockerMachine
	I0909 11:42:33.921487  299508 client.go:171] duration metric: took 10.811738013s to LocalClient.Create
	I0909 11:42:33.921504  299508 start.go:167] duration metric: took 10.811800298s to libmachine.API.Create "addons-630724"
	I0909 11:42:33.921512  299508 start.go:293] postStartSetup for "addons-630724" (driver="docker")
	I0909 11:42:33.921521  299508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0909 11:42:33.921578  299508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0909 11:42:33.921617  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:42:33.940087  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:42:34.037216  299508 ssh_runner.go:195] Run: cat /etc/os-release
	I0909 11:42:34.041747  299508 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0909 11:42:34.041784  299508 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0909 11:42:34.041800  299508 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0909 11:42:34.041807  299508 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0909 11:42:34.041819  299508 filesync.go:126] Scanning /home/jenkins/minikube-integration/19584-293351/.minikube/addons for local assets ...
	I0909 11:42:34.041895  299508 filesync.go:126] Scanning /home/jenkins/minikube-integration/19584-293351/.minikube/files for local assets ...
	I0909 11:42:34.041934  299508 start.go:296] duration metric: took 120.415868ms for postStartSetup
	I0909 11:42:34.044305  299508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630724
	I0909 11:42:34.068333  299508 profile.go:143] Saving config to /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/config.json ...
	I0909 11:42:34.068638  299508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0909 11:42:34.068692  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:42:34.085909  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:42:34.170832  299508 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0909 11:42:34.175867  299508 start.go:128] duration metric: took 11.069659214s to createHost
	I0909 11:42:34.175892  299508 start.go:83] releasing machines lock for "addons-630724", held for 11.069829388s
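The two df probes above are minikube's disk-pressure check: `df -h /var | awk 'NR==2{print $5}'` reports the used percentage of /var, and `df -BG /var | awk 'NR==2{print $4}'` the free space in whole gigabytes. They can be reproduced by hand against the same node; a sketch assuming the addons-630724 container from this run is still up:

    docker exec addons-630724 sh -c "df -h /var | awk 'NR==2{print \$5}'"   # used %
    docker exec addons-630724 sh -c "df -BG /var | awk 'NR==2{print \$4}'"  # free GiB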
	I0909 11:42:34.175966  299508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630724
	I0909 11:42:34.192895  299508 ssh_runner.go:195] Run: cat /version.json
	I0909 11:42:34.192930  299508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0909 11:42:34.192959  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:42:34.192971  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:42:34.218238  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:42:34.223559  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:42:34.430089  299508 ssh_runner.go:195] Run: systemctl --version
	I0909 11:42:34.434570  299508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0909 11:42:34.438901  299508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0909 11:42:34.465269  299508 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0909 11:42:34.465393  299508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0909 11:42:34.496110  299508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
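The two find commands above first patch any loopback CNI config (injecting a "name" field and pinning cniVersion to 1.0.0), then rename the competing bridge and podman configs to *.mk_disabled so the runtime only sees minikube's CNI. What remains active can be confirmed with a sketch like the following, assuming the profile is still reachable over minikube ssh:

    minikube -p addons-630724 ssh -- sudo ls -l /etc/cni/net.d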
	I0909 11:42:34.496132  299508 start.go:495] detecting cgroup driver to use...
	I0909 11:42:34.496186  299508 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0909 11:42:34.496261  299508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0909 11:42:34.509194  299508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0909 11:42:34.521088  299508 docker.go:217] disabling cri-docker service (if available) ...
	I0909 11:42:34.521205  299508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0909 11:42:34.535522  299508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0909 11:42:34.550129  299508 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0909 11:42:34.642356  299508 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0909 11:42:34.738006  299508 docker.go:233] disabling docker service ...
	I0909 11:42:34.738081  299508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0909 11:42:34.758158  299508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0909 11:42:34.770300  299508 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0909 11:42:34.854658  299508 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0909 11:42:34.946973  299508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0909 11:42:34.958557  299508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0909 11:42:34.977076  299508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0909 11:42:34.988095  299508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0909 11:42:35.000102  299508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0909 11:42:35.000183  299508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0909 11:42:35.010796  299508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0909 11:42:35.022199  299508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0909 11:42:35.036342  299508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0909 11:42:35.052614  299508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0909 11:42:35.066665  299508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0909 11:42:35.080698  299508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0909 11:42:35.092931  299508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0909 11:42:35.105493  299508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0909 11:42:35.115261  299508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0909 11:42:35.125938  299508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0909 11:42:35.218936  299508 ssh_runner.go:195] Run: sudo systemctl restart containerd
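The sed pipeline between 11:42:34.977076 and 11:42:35.092931 rewrites /etc/containerd/config.toml in place: sandbox_image to registry.k8s.io/pause:3.10, SystemdCgroup = false (matching the detected cgroupfs driver), the runc v2 runtime, conf_dir = /etc/cni/net.d, and enable_unprivileged_ports = true. After the restart above, the effective values can be spot-checked on the node; a sketch:

    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml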
	I0909 11:42:35.346024  299508 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0909 11:42:35.346136  299508 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0909 11:42:35.349699  299508 start.go:563] Will wait 60s for crictl version
	I0909 11:42:35.349792  299508 ssh_runner.go:195] Run: which crictl
	I0909 11:42:35.353239  299508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0909 11:42:35.388496  299508 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.21
	RuntimeApiVersion:  v1
	I0909 11:42:35.388607  299508 ssh_runner.go:195] Run: containerd --version
	I0909 11:42:35.411597  299508 ssh_runner.go:195] Run: containerd --version
	I0909 11:42:35.439373  299508 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.21 ...
	I0909 11:42:35.441744  299508 cli_runner.go:164] Run: docker network inspect addons-630724 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0909 11:42:35.457565  299508 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0909 11:42:35.461280  299508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0909 11:42:35.472484  299508 kubeadm.go:883] updating cluster {Name:addons-630724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-630724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0909 11:42:35.472609  299508 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0909 11:42:35.472672  299508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0909 11:42:35.511498  299508 containerd.go:627] all images are preloaded for containerd runtime.
	I0909 11:42:35.511523  299508 containerd.go:534] Images already preloaded, skipping extraction
	I0909 11:42:35.511586  299508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0909 11:42:35.548770  299508 containerd.go:627] all images are preloaded for containerd runtime.
	I0909 11:42:35.548794  299508 cache_images.go:84] Images are preloaded, skipping loading
	I0909 11:42:35.548803  299508 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0909 11:42:35.548912  299508 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-630724 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-630724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0909 11:42:35.548987  299508 ssh_runner.go:195] Run: sudo crictl info
	I0909 11:42:35.586203  299508 cni.go:84] Creating CNI manager for ""
	I0909 11:42:35.586227  299508 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0909 11:42:35.586237  299508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0909 11:42:35.586287  299508 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-630724 NodeName:addons-630724 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0909 11:42:35.586454  299508 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-630724"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
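The rendered kubeadm config above is what later lands in /var/tmp/minikube/kubeadm.yaml (2167 bytes, copied at 11:42:35.644607 and promoted from .new at 11:42:38.420840). It can be sanity-checked offline before an init; a sketch assuming a kubeadm new enough (v1.26+) to carry the `config validate` subcommand:

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml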
	
	I0909 11:42:35.586533  299508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0909 11:42:35.595890  299508 binaries.go:44] Found k8s binaries, skipping transfer
	I0909 11:42:35.596007  299508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0909 11:42:35.605622  299508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0909 11:42:35.624809  299508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0909 11:42:35.644607  299508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0909 11:42:35.663698  299508 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0909 11:42:35.667507  299508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0909 11:42:35.678754  299508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0909 11:42:35.757455  299508 ssh_runner.go:195] Run: sudo systemctl start kubelet
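The files written just above (the 10-kubeadm.conf drop-in overriding ExecStart with the flags from the [Service] block at 11:42:35.548912, plus the kubelet.service unit) are activated by the daemon-reload and start. Had the start failed, the usual first checks would be:

    systemctl status kubelet --no-pager
    journalctl -u kubelet --no-pager -n 50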
	I0909 11:42:35.774295  299508 certs.go:68] Setting up /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724 for IP: 192.168.49.2
	I0909 11:42:35.774319  299508 certs.go:194] generating shared ca certs ...
	I0909 11:42:35.774336  299508 certs.go:226] acquiring lock for ca certs: {Name:mk64d965bac1786dcbeb0e8f5c0f634bb13cc423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 11:42:35.774986  299508 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19584-293351/.minikube/ca.key
	I0909 11:42:36.317340  299508 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19584-293351/.minikube/ca.crt ...
	I0909 11:42:36.317373  299508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-293351/.minikube/ca.crt: {Name:mkf9a433af54dfff56e580a413ccd0c375984f1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 11:42:36.318081  299508 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19584-293351/.minikube/ca.key ...
	I0909 11:42:36.318102  299508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-293351/.minikube/ca.key: {Name:mkafecebf80a44c8a45269c88abf3f40c10fd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 11:42:36.318618  299508 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19584-293351/.minikube/proxy-client-ca.key
	I0909 11:42:36.474407  299508 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19584-293351/.minikube/proxy-client-ca.crt ...
	I0909 11:42:36.474439  299508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-293351/.minikube/proxy-client-ca.crt: {Name:mk8153a3108267b87d3c5699ff2c17cf455d9395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 11:42:36.474617  299508 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19584-293351/.minikube/proxy-client-ca.key ...
	I0909 11:42:36.474632  299508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-293351/.minikube/proxy-client-ca.key: {Name:mka3b55ff605ab68ba3630eec767fe2ec5145ee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 11:42:36.474711  299508 certs.go:256] generating profile certs ...
	I0909 11:42:36.474778  299508 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.key
	I0909 11:42:36.474795  299508 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt with IP's: []
	I0909 11:42:36.767309  299508 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt ...
	I0909 11:42:36.767342  299508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: {Name:mk1328f11c7d94ea703f40fa72a1003fa1699b7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 11:42:36.767533  299508 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.key ...
	I0909 11:42:36.767545  299508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.key: {Name:mk600396ffa4ae140d6d5d7d95498f20d073bd0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 11:42:36.768152  299508 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/apiserver.key.4dc5a15b
	I0909 11:42:36.768178  299508 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/apiserver.crt.4dc5a15b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0909 11:42:37.147472  299508 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/apiserver.crt.4dc5a15b ...
	I0909 11:42:37.147506  299508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/apiserver.crt.4dc5a15b: {Name:mk2b6bd1a9788702ef050bdf3deaf2e64a62a8b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 11:42:37.147701  299508 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/apiserver.key.4dc5a15b ...
	I0909 11:42:37.147718  299508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/apiserver.key.4dc5a15b: {Name:mk3e1e8d0b0f9af6dc7861b24c02faec4f7efe60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 11:42:37.147827  299508 certs.go:381] copying /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/apiserver.crt.4dc5a15b -> /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/apiserver.crt
	I0909 11:42:37.147906  299508 certs.go:385] copying /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/apiserver.key.4dc5a15b -> /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/apiserver.key
	I0909 11:42:37.147966  299508 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/proxy-client.key
	I0909 11:42:37.147988  299508 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/proxy-client.crt with IP's: []
	I0909 11:42:38.074188  299508 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/proxy-client.crt ...
	I0909 11:42:38.074227  299508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/proxy-client.crt: {Name:mk217697632eca99e9f3ca40c580d8d366e1e881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 11:42:38.074436  299508 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/proxy-client.key ...
	I0909 11:42:38.074456  299508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/proxy-client.key: {Name:mk0079fe613e2d8aba3833c6cf13cdc173642ec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 11:42:38.074663  299508 certs.go:484] found cert: /home/jenkins/minikube-integration/19584-293351/.minikube/certs/ca-key.pem (1679 bytes)
	I0909 11:42:38.074708  299508 certs.go:484] found cert: /home/jenkins/minikube-integration/19584-293351/.minikube/certs/ca.pem (1078 bytes)
	I0909 11:42:38.074747  299508 certs.go:484] found cert: /home/jenkins/minikube-integration/19584-293351/.minikube/certs/cert.pem (1123 bytes)
	I0909 11:42:38.074785  299508 certs.go:484] found cert: /home/jenkins/minikube-integration/19584-293351/.minikube/certs/key.pem (1679 bytes)
	I0909 11:42:38.075537  299508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-293351/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0909 11:42:38.105929  299508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-293351/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0909 11:42:38.132957  299508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-293351/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0909 11:42:38.162916  299508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-293351/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0909 11:42:38.187851  299508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0909 11:42:38.213257  299508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0909 11:42:38.238680  299508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0909 11:42:38.263033  299508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0909 11:42:38.287726  299508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-293351/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0909 11:42:38.312107  299508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0909 11:42:38.330593  299508 ssh_runner.go:195] Run: openssl version
	I0909 11:42:38.336389  299508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0909 11:42:38.347009  299508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0909 11:42:38.350795  299508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  9 11:42 /usr/share/ca-certificates/minikubeCA.pem
	I0909 11:42:38.350904  299508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0909 11:42:38.357895  299508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
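The symlink name b5213941.0 above is not arbitrary: OpenSSL resolves trusted CAs by subject-name hash, so the link is only found if hashing minikubeCA.pem yields b5213941, which is exactly what the `openssl x509 -hash` run at 11:42:38.350904 computed. Reproducing it:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # expected: b5213941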
	I0909 11:42:38.367536  299508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0909 11:42:38.370851  299508 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0909 11:42:38.370928  299508 kubeadm.go:392] StartCluster: {Name:addons-630724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-630724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0909 11:42:38.371028  299508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0909 11:42:38.371102  299508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0909 11:42:38.411753  299508 cri.go:89] found id: ""
	I0909 11:42:38.411830  299508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0909 11:42:38.420840  299508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0909 11:42:38.429837  299508 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0909 11:42:38.429905  299508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0909 11:42:38.438735  299508 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0909 11:42:38.438754  299508 kubeadm.go:157] found existing configuration files:
	
	I0909 11:42:38.438805  299508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0909 11:42:38.447785  299508 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0909 11:42:38.447850  299508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0909 11:42:38.456329  299508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0909 11:42:38.465055  299508 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0909 11:42:38.465135  299508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0909 11:42:38.473724  299508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0909 11:42:38.482671  299508 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0909 11:42:38.482755  299508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0909 11:42:38.491304  299508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0909 11:42:38.500206  299508 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0909 11:42:38.500294  299508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0909 11:42:38.508447  299508 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0909 11:42:38.552111  299508 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0909 11:42:38.552439  299508 kubeadm.go:310] [preflight] Running pre-flight checks
	I0909 11:42:38.571015  299508 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0909 11:42:38.571091  299508 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0909 11:42:38.571132  299508 kubeadm.go:310] OS: Linux
	I0909 11:42:38.571179  299508 kubeadm.go:310] CGROUPS_CPU: enabled
	I0909 11:42:38.571229  299508 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0909 11:42:38.571279  299508 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0909 11:42:38.571330  299508 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0909 11:42:38.571378  299508 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0909 11:42:38.571447  299508 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0909 11:42:38.571494  299508 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0909 11:42:38.571544  299508 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0909 11:42:38.571592  299508 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0909 11:42:38.639156  299508 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0909 11:42:38.639271  299508 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0909 11:42:38.639666  299508 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0909 11:42:38.645028  299508 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0909 11:42:38.648044  299508 out.go:235]   - Generating certificates and keys ...
	I0909 11:42:38.648235  299508 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0909 11:42:38.648356  299508 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0909 11:42:39.211808  299508 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0909 11:42:39.405451  299508 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0909 11:42:39.769822  299508 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0909 11:42:40.657856  299508 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0909 11:42:41.257478  299508 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0909 11:42:41.257745  299508 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-630724 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0909 11:42:41.771337  299508 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0909 11:42:41.771470  299508 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-630724 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0909 11:42:42.221887  299508 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0909 11:42:43.227376  299508 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0909 11:42:43.673026  299508 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0909 11:42:43.673267  299508 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0909 11:42:43.976333  299508 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0909 11:42:44.732957  299508 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0909 11:42:45.287077  299508 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0909 11:42:45.775464  299508 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0909 11:42:46.375960  299508 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0909 11:42:46.376670  299508 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0909 11:42:46.379680  299508 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0909 11:42:46.382705  299508 out.go:235]   - Booting up control plane ...
	I0909 11:42:46.382811  299508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0909 11:42:46.382920  299508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0909 11:42:46.383007  299508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0909 11:42:46.407323  299508 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0909 11:42:46.414142  299508 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0909 11:42:46.414200  299508 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0909 11:42:46.508900  299508 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0909 11:42:46.509020  299508 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0909 11:42:48.009775  299508 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.50097337s
	I0909 11:42:48.010092  299508 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0909 11:42:54.012034  299508 kubeadm.go:310] [api-check] The API server is healthy after 6.001766554s
	I0909 11:42:54.041608  299508 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0909 11:42:54.075357  299508 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0909 11:42:54.101688  299508 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0909 11:42:54.101899  299508 kubeadm.go:310] [mark-control-plane] Marking the node addons-630724 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0909 11:42:54.113710  299508 kubeadm.go:310] [bootstrap-token] Using token: 8vzcty.j9q0clltylx56b6k
	I0909 11:42:54.116149  299508 out.go:235]   - Configuring RBAC rules ...
	I0909 11:42:54.116284  299508 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0909 11:42:54.123581  299508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0909 11:42:54.132649  299508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0909 11:42:54.136945  299508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0909 11:42:54.140829  299508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0909 11:42:54.145011  299508 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0909 11:42:54.418878  299508 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0909 11:42:54.847994  299508 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0909 11:42:55.419211  299508 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0909 11:42:55.425450  299508 kubeadm.go:310] 
	I0909 11:42:55.425530  299508 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0909 11:42:55.425541  299508 kubeadm.go:310] 
	I0909 11:42:55.425622  299508 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0909 11:42:55.425631  299508 kubeadm.go:310] 
	I0909 11:42:55.425656  299508 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0909 11:42:55.425739  299508 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0909 11:42:55.425797  299508 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0909 11:42:55.425803  299508 kubeadm.go:310] 
	I0909 11:42:55.425856  299508 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0909 11:42:55.425860  299508 kubeadm.go:310] 
	I0909 11:42:55.425907  299508 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0909 11:42:55.425912  299508 kubeadm.go:310] 
	I0909 11:42:55.425967  299508 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0909 11:42:55.426041  299508 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0909 11:42:55.426107  299508 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0909 11:42:55.426112  299508 kubeadm.go:310] 
	I0909 11:42:55.426193  299508 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0909 11:42:55.426267  299508 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0909 11:42:55.426272  299508 kubeadm.go:310] 
	I0909 11:42:55.428418  299508 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8vzcty.j9q0clltylx56b6k \
	I0909 11:42:55.428530  299508 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d97303b8dff703c93af99443bb4f4617d91cfbc7f43fe338dca36f5194fc79ea \
	I0909 11:42:55.428551  299508 kubeadm.go:310] 	--control-plane 
	I0909 11:42:55.428556  299508 kubeadm.go:310] 
	I0909 11:42:55.428638  299508 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0909 11:42:55.428642  299508 kubeadm.go:310] 
	I0909 11:42:55.428720  299508 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8vzcty.j9q0clltylx56b6k \
	I0909 11:42:55.428818  299508 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d97303b8dff703c93af99443bb4f4617d91cfbc7f43fe338dca36f5194fc79ea 
	I0909 11:42:55.433635  299508 kubeadm.go:310] W0909 11:42:38.548617    1026 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0909 11:42:55.433930  299508 kubeadm.go:310] W0909 11:42:38.549656    1026 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0909 11:42:55.434141  299508 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0909 11:42:55.434246  299508 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
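The join commands printed above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed from the CA certificate, kept here under /var/lib/minikube/certs per the CertDir above; a sketch following the standard kubeadm recipe:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should match d97303b8dff703c93af99443bb4f4617d91cfbc7f43fe338dca36f5194fc79ea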
	I0909 11:42:55.434271  299508 cni.go:84] Creating CNI manager for ""
	I0909 11:42:55.434279  299508 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0909 11:42:55.438188  299508 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0909 11:42:55.440646  299508 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0909 11:42:55.445642  299508 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0909 11:42:55.445667  299508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0909 11:42:55.466164  299508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
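The kubectl apply above installs the kindnet manifest chosen for the docker driver + containerd combination. Its rollout can be verified once the API server answers; a sketch assuming kindnet runs as its usual kube-system DaemonSet named kindnet (the name is not shown explicitly in this log):

    kubectl --context addons-630724 -n kube-system rollout status daemonset kindnet --timeout=120s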
	I0909 11:42:55.758646  299508 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0909 11:42:55.758812  299508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 11:42:55.758902  299508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-630724 minikube.k8s.io/updated_at=2024_09_09T11_42_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=cf17d6b4040a54caaa170f92a048a513bb2a2b0d minikube.k8s.io/name=addons-630724 minikube.k8s.io/primary=true
	I0909 11:42:55.766822  299508 ops.go:34] apiserver oom_adj: -16
	I0909 11:42:55.944617  299508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 11:42:56.444733  299508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 11:42:56.945316  299508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 11:42:57.445729  299508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 11:42:57.944755  299508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 11:42:58.445595  299508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 11:42:58.945538  299508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 11:42:59.445476  299508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 11:42:59.945642  299508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 11:43:00.097723  299508 kubeadm.go:1113] duration metric: took 4.338957155s to wait for elevateKubeSystemPrivileges
	I0909 11:43:00.097757  299508 kubeadm.go:394] duration metric: took 21.726834181s to StartCluster
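The repeated `kubectl get sa default` calls between 11:42:55.944617 and 11:43:00 are a poll loop: after creating the minikube-rbac ClusterRoleBinding (11:42:55.758812), minikube waits for a default ServiceAccount to exist before declaring kube-system privileges elevated. The resulting grant can be inspected afterwards:

    kubectl --context addons-630724 get clusterrolebinding minikube-rbac -o wide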
	I0909 11:43:00.097780  299508 settings.go:142] acquiring lock: {Name:mkc0cf74a30d95d81b37a0b110eb41a0038a2e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 11:43:00.098482  299508 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19584-293351/kubeconfig
	I0909 11:43:00.098913  299508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-293351/kubeconfig: {Name:mkbeab7fc7181afffa8bad310530f684996f4102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 11:43:00.099267  299508 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0909 11:43:00.099518  299508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0909 11:43:00.100112  299508 config.go:182] Loaded profile config "addons-630724": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0909 11:43:00.100167  299508 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0909 11:43:00.100250  299508 addons.go:69] Setting yakd=true in profile "addons-630724"
	I0909 11:43:00.100273  299508 addons.go:234] Setting addon yakd=true in "addons-630724"
	I0909 11:43:00.100301  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.100799  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.111155  299508 out.go:177] * Verifying Kubernetes components...
	I0909 11:43:00.111605  299508 addons.go:69] Setting inspektor-gadget=true in profile "addons-630724"
	I0909 11:43:00.111636  299508 addons.go:234] Setting addon inspektor-gadget=true in "addons-630724"
	I0909 11:43:00.111674  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.112205  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.125570  299508 addons.go:69] Setting metrics-server=true in profile "addons-630724"
	I0909 11:43:00.125621  299508 addons.go:234] Setting addon metrics-server=true in "addons-630724"
	I0909 11:43:00.125668  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.126152  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.133375  299508 addons.go:69] Setting cloud-spanner=true in profile "addons-630724"
	I0909 11:43:00.133435  299508 addons.go:234] Setting addon cloud-spanner=true in "addons-630724"
	I0909 11:43:00.133480  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.133981  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.157576  299508 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-630724"
	I0909 11:43:00.157632  299508 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-630724"
	I0909 11:43:00.157675  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.158168  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.195154  299508 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-630724"
	I0909 11:43:00.195254  299508 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-630724"
	I0909 11:43:00.195292  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.195817  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.216963  299508 addons.go:69] Setting default-storageclass=true in profile "addons-630724"
	I0909 11:43:00.217268  299508 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-630724"
	I0909 11:43:00.232318  299508 addons.go:69] Setting registry=true in profile "addons-630724"
	I0909 11:43:00.232381  299508 addons.go:234] Setting addon registry=true in "addons-630724"
	I0909 11:43:00.232425  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.232926  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.217147  299508 addons.go:69] Setting gcp-auth=true in profile "addons-630724"
	I0909 11:43:00.235296  299508 mustload.go:65] Loading cluster: addons-630724
	I0909 11:43:00.235606  299508 config.go:182] Loaded profile config "addons-630724": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0909 11:43:00.235949  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.217161  299508 addons.go:69] Setting ingress=true in profile "addons-630724"
	I0909 11:43:00.245760  299508 addons.go:234] Setting addon ingress=true in "addons-630724"
	I0909 11:43:00.245848  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.246414  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.217173  299508 addons.go:69] Setting ingress-dns=true in profile "addons-630724"
	I0909 11:43:00.257571  299508 addons.go:234] Setting addon ingress-dns=true in "addons-630724"
	I0909 11:43:00.257737  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.258680  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.258928  299508 addons.go:69] Setting storage-provisioner=true in profile "addons-630724"
	I0909 11:43:00.259006  299508 addons.go:234] Setting addon storage-provisioner=true in "addons-630724"
	I0909 11:43:00.259095  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.275679  299508 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-630724"
	I0909 11:43:00.275796  299508 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-630724"
	I0909 11:43:00.276248  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.276494  299508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0909 11:43:00.285773  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.350658  299508 addons.go:69] Setting volcano=true in profile "addons-630724"
	I0909 11:43:00.350723  299508 addons.go:234] Setting addon volcano=true in "addons-630724"
	I0909 11:43:00.350771  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.351287  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.375547  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.381771  299508 addons.go:69] Setting volumesnapshots=true in profile "addons-630724"
	I0909 11:43:00.381824  299508 addons.go:234] Setting addon volumesnapshots=true in "addons-630724"
	I0909 11:43:00.381873  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.382388  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.480100  299508 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0909 11:43:00.491030  299508 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0909 11:43:00.491081  299508 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0909 11:43:00.550003  299508 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0909 11:43:00.570067  299508 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0909 11:43:00.572571  299508 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0909 11:43:00.572596  299508 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0909 11:43:00.572674  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:00.572861  299508 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0909 11:43:00.578468  299508 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0909 11:43:00.578494  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0909 11:43:00.578561  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
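A note on the `scp memory --> ...` lines above and below: the source is not a file on the host; the manifest bytes are embedded in the minikube binary and streamed to the node over the SSH session that sshutil opens. Assuming a local copy of the asset (here a hypothetical deployment.yaml standing in for the embedded bytes), the transfer amounts to roughly this shell sketch, reusing the endpoint shown in the sshutil lines:

    ssh -i /home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa \
        -p 33138 docker@127.0.0.1 \
        "sudo tee /etc/kubernetes/addons/deployment.yaml >/dev/null" < deployment.yaml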
	I0909 11:43:00.596814  299508 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0909 11:43:00.596838  299508 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0909 11:43:00.596907  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:00.600885  299508 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0909 11:43:00.604479  299508 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0909 11:43:00.604514  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0909 11:43:00.604581  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:00.616096  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:00.616882  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.644901  299508 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0909 11:43:00.648800  299508 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0909 11:43:00.648873  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0909 11:43:00.648981  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:00.650836  299508 addons.go:234] Setting addon default-storageclass=true in "addons-630724"
	I0909 11:43:00.650879  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.651313  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.666426  299508 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0909 11:43:00.670871  299508 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0909 11:43:00.670969  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0909 11:43:00.671075  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:00.690887  299508 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0909 11:43:00.670889  299508 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0909 11:43:00.708556  299508 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0909 11:43:00.710807  299508 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0909 11:43:00.712800  299508 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0909 11:43:00.719675  299508 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0909 11:43:00.722679  299508 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0909 11:43:00.722758  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:43:00.725736  299508 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0909 11:43:00.725983  299508 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0909 11:43:00.730166  299508 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0909 11:43:00.730392  299508 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0909 11:43:00.730406  299508 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0909 11:43:00.730484  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:00.755004  299508 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0909 11:43:00.757422  299508 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0909 11:43:00.760024  299508 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0909 11:43:00.760047  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0909 11:43:00.760111  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:00.762143  299508 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0909 11:43:00.762188  299508 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0909 11:43:00.762293  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:00.777042  299508 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0909 11:43:00.779262  299508 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0909 11:43:00.782937  299508 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0909 11:43:00.789406  299508 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0909 11:43:00.789430  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0909 11:43:00.789499  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:00.819210  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:43:00.823281  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:43:00.831214  299508 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-630724"
	I0909 11:43:00.831259  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:00.831682  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:00.839859  299508 out.go:177]   - Using image docker.io/registry:2.8.3
	I0909 11:43:00.845606  299508 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0909 11:43:00.847861  299508 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0909 11:43:00.847880  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0909 11:43:00.847947  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:00.851890  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:43:00.884780  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:43:00.906322  299508 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0909 11:43:00.906342  299508 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0909 11:43:00.906401  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:00.913423  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:43:00.921128  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:43:00.958855  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:43:00.973306  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:43:00.988821  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:43:00.989545  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:43:01.004834  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	W0909 11:43:01.006044  299508 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0909 11:43:01.006077  299508 retry.go:31] will retry after 268.955619ms: ssh: handshake failed: EOF
	I0909 11:43:01.008623  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:43:01.021731  299508 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0909 11:43:01.023914  299508 out.go:177]   - Using image docker.io/busybox:stable
	I0909 11:43:01.026892  299508 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0909 11:43:01.026918  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0909 11:43:01.026990  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:01.053689  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	W0909 11:43:01.054692  299508 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0909 11:43:01.054717  299508 retry.go:31] will retry after 134.813462ms: ssh: handshake failed: EOF
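The two sshutil warnings above show the generic retry helper (retry.go) absorbing transient `ssh: handshake failed: EOF` errors while a dozen SSH sessions are opened against the node in quick succession; each failed dial is retried after a short delay. A minimal shell sketch of the same retry-on-transient-failure idea, using the endpoint from the surrounding lines (the fixed 0.3s sleep is a simplification; the real helper backs off with jitter):

    for attempt in 1 2 3; do
        ssh -i /home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa \
            -p 33138 docker@127.0.0.1 true && break
        sleep 0.3
    done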
	I0909 11:43:01.121976  299508 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.022429595s)
	I0909 11:43:01.122197  299508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0909 11:43:01.122339  299508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0909 11:43:01.389802  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0909 11:43:01.557412  299508 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0909 11:43:01.557440  299508 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0909 11:43:01.589760  299508 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0909 11:43:01.589786  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0909 11:43:01.602951  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0909 11:43:01.652502  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0909 11:43:01.740887  299508 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0909 11:43:01.740921  299508 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0909 11:43:01.745652  299508 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0909 11:43:01.745679  299508 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0909 11:43:01.752123  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0909 11:43:01.788566  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0909 11:43:01.810967  299508 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0909 11:43:01.810993  299508 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0909 11:43:01.818916  299508 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0909 11:43:01.818943  299508 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0909 11:43:01.850744  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0909 11:43:01.869439  299508 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0909 11:43:01.869462  299508 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0909 11:43:01.879437  299508 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0909 11:43:01.879462  299508 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0909 11:43:02.036909  299508 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0909 11:43:02.036937  299508 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0909 11:43:02.097029  299508 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0909 11:43:02.097056  299508 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0909 11:43:02.149525  299508 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0909 11:43:02.149549  299508 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0909 11:43:02.173726  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0909 11:43:02.197648  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0909 11:43:02.207993  299508 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0909 11:43:02.208022  299508 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0909 11:43:02.227309  299508 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0909 11:43:02.227339  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0909 11:43:02.280565  299508 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0909 11:43:02.280593  299508 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0909 11:43:02.458438  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0909 11:43:02.497191  299508 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0909 11:43:02.497224  299508 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0909 11:43:02.505310  299508 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0909 11:43:02.505412  299508 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0909 11:43:02.557231  299508 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0909 11:43:02.557257  299508 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0909 11:43:02.597564  299508 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0909 11:43:02.597595  299508 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0909 11:43:02.611596  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0909 11:43:02.911437  299508 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0909 11:43:02.911465  299508 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0909 11:43:02.938427  299508 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0909 11:43:02.938455  299508 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0909 11:43:02.955647  299508 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0909 11:43:02.955676  299508 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0909 11:43:03.102646  299508 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0909 11:43:03.102670  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0909 11:43:03.265430  299508 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0909 11:43:03.265458  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0909 11:43:03.296974  299508 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0909 11:43:03.296999  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0909 11:43:03.432720  299508 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0909 11:43:03.432748  299508 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0909 11:43:03.480776  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0909 11:43:03.589700  299508 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.467312193s)
	I0909 11:43:03.589799  299508 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.467563449s)
	I0909 11:43:03.589821  299508 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
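The replace that just completed pipes the coredns ConfigMap through the sed expression shown above before feeding it back to `kubectl replace -f -`. Reconstructed from that expression, the Corefile gains a `log` directive plus a hosts block ahead of the resolv.conf forwarder, so pods can resolve the host gateway by name:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }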
	I0909 11:43:03.589888  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.200057004s)
	I0909 11:43:03.591606  299508 node_ready.go:35] waiting up to 6m0s for node "addons-630724" to be "Ready" ...
	I0909 11:43:03.600386  299508 node_ready.go:49] node "addons-630724" has status "Ready":"True"
	I0909 11:43:03.600411  299508 node_ready.go:38] duration metric: took 8.663302ms for node "addons-630724" to be "Ready" ...
	I0909 11:43:03.600421  299508 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0909 11:43:03.626012  299508 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7w27w" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:03.776153  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0909 11:43:03.788321  299508 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0909 11:43:03.788387  299508 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0909 11:43:03.793317  299508 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0909 11:43:03.793409  299508 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0909 11:43:04.098202  299508 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-630724" context rescaled to 1 replicas
	I0909 11:43:04.117097  299508 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0909 11:43:04.117175  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0909 11:43:04.120805  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.517813985s)
	I0909 11:43:04.129901  299508 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-7w27w" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-7w27w" not found
	I0909 11:43:04.129971  299508 pod_ready.go:82] duration metric: took 503.878608ms for pod "coredns-6f6b679f8f-7w27w" in "kube-system" namespace to be "Ready" ...
	E0909 11:43:04.129998  299508 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-7w27w" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-7w27w" not found
	I0909 11:43:04.130017  299508 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zj4kb" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:04.184074  299508 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0909 11:43:04.184152  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0909 11:43:04.196931  299508 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0909 11:43:04.196996  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0909 11:43:04.365649  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0909 11:43:04.437797  299508 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0909 11:43:04.437876  299508 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0909 11:43:04.766738  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.11419906s)
	I0909 11:43:04.802771  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0909 11:43:06.140746  299508 pod_ready.go:103] pod "coredns-6f6b679f8f-zj4kb" in "kube-system" namespace has status "Ready":"False"
	I0909 11:43:07.839758  299508 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0909 11:43:07.839882  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:07.865565  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:43:08.253174  299508 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0909 11:43:08.459174  299508 addons.go:234] Setting addon gcp-auth=true in "addons-630724"
	I0909 11:43:08.459269  299508 host.go:66] Checking if "addons-630724" exists ...
	I0909 11:43:08.459815  299508 cli_runner.go:164] Run: docker container inspect addons-630724 --format={{.State.Status}}
	I0909 11:43:08.485133  299508 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0909 11:43:08.485189  299508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630724
	I0909 11:43:08.508504  299508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/addons-630724/id_rsa Username:docker}
	I0909 11:43:08.637602  299508 pod_ready.go:103] pod "coredns-6f6b679f8f-zj4kb" in "kube-system" namespace has status "Ready":"False"
	I0909 11:43:10.642636  299508 pod_ready.go:103] pod "coredns-6f6b679f8f-zj4kb" in "kube-system" namespace has status "Ready":"False"
	I0909 11:43:11.238514  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.486353441s)
	I0909 11:43:11.238652  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.450063986s)
	I0909 11:43:11.238669  299508 addons.go:475] Verifying addon ingress=true in "addons-630724"
	I0909 11:43:11.238859  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.388082634s)
	I0909 11:43:11.238910  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.065156021s)
	I0909 11:43:11.239154  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.041480318s)
	I0909 11:43:11.239220  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.780743643s)
	I0909 11:43:11.239242  299508 addons.go:475] Verifying addon metrics-server=true in "addons-630724"
	I0909 11:43:11.239267  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.627645131s)
	I0909 11:43:11.239278  299508 addons.go:475] Verifying addon registry=true in "addons-630724"
	I0909 11:43:11.239286  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.758483659s)
	I0909 11:43:11.239548  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.463312313s)
	W0909 11:43:11.239578  299508 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0909 11:43:11.239609  299508 retry.go:31] will retry after 263.912122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
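This retry is the usual CRD ordering race: the batch apply creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass in one shot, and the apiserver has not yet established the new kinds when the class is submitted, hence "no matches for kind". The harness simply waits 263ms and re-applies (with --force; see 11:43:11.504 below), which succeeds once the CRDs are established. Done by hand, an order-aware sketch using the same on-node paths would be:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established --timeout=60s \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml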
	I0909 11:43:11.239676  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.873948803s)
	I0909 11:43:11.241567  299508 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-630724 service yakd-dashboard -n yakd-dashboard
	
	I0909 11:43:11.241622  299508 out.go:177] * Verifying registry addon...
	I0909 11:43:11.241600  299508 out.go:177] * Verifying ingress addon...
	I0909 11:43:11.245680  299508 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0909 11:43:11.246708  299508 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0909 11:43:11.323497  299508 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0909 11:43:11.323527  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:11.324617  299508 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0909 11:43:11.324652  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0909 11:43:11.367924  299508 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
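This warning is an optimistic-concurrency conflict, not a permanent failure: making `standard` the default requires clearing the default annotation on the `local-path` StorageClass, and that object's resourceVersion changed between read and write, so the apiserver rejects the stale update. The operation is safe to re-run; by hand it would amount to the standard annotation patches (a sketch, not what the addon code literally executes):

    kubectl patch storageclass local-path -p \
        '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass standard -p \
        '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'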
	I0909 11:43:11.504635  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0909 11:43:11.812856  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:11.814362  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:12.075992  299508 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.590806581s)
	I0909 11:43:12.076099  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.273293279s)
	I0909 11:43:12.076123  299508 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-630724"
	I0909 11:43:12.082390  299508 out.go:177] * Verifying csi-hostpath-driver addon...
	I0909 11:43:12.082526  299508 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0909 11:43:12.092738  299508 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0909 11:43:12.095250  299508 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0909 11:43:12.097399  299508 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0909 11:43:12.097477  299508 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0909 11:43:12.103630  299508 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0909 11:43:12.103804  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:12.164374  299508 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0909 11:43:12.164401  299508 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0909 11:43:12.217611  299508 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0909 11:43:12.217638  299508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0909 11:43:12.252585  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:12.253832  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:12.258134  299508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0909 11:43:12.598648  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:12.754212  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:12.754828  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:13.100503  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:13.149755  299508 pod_ready.go:103] pod "coredns-6f6b679f8f-zj4kb" in "kube-system" namespace has status "Ready":"False"
	I0909 11:43:13.255763  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:13.263102  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:13.442497  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.937796432s)
	I0909 11:43:13.442565  299508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.184342054s)
	I0909 11:43:13.446153  299508 addons.go:475] Verifying addon gcp-auth=true in "addons-630724"
	I0909 11:43:13.448875  299508 out.go:177] * Verifying gcp-auth addon...
	I0909 11:43:13.452009  299508 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0909 11:43:13.455210  299508 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0909 11:43:13.598480  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:13.753867  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:13.755055  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:14.098775  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:14.252556  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:14.253987  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:14.608858  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:14.752084  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:14.753576  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:15.099504  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:15.252483  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:15.252642  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:15.598338  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:15.639415  299508 pod_ready.go:103] pod "coredns-6f6b679f8f-zj4kb" in "kube-system" namespace has status "Ready":"False"
	I0909 11:43:15.751462  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:15.752127  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:16.099351  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:16.252869  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:16.253598  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:16.598636  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:16.752558  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:16.753133  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:17.097956  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:17.250311  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:17.252408  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:17.598893  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:17.751342  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:17.752272  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:18.099200  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:18.140569  299508 pod_ready.go:103] pod "coredns-6f6b679f8f-zj4kb" in "kube-system" namespace has status "Ready":"False"
	I0909 11:43:18.250448  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:18.250892  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:18.597985  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:18.751576  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:18.752500  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:19.099760  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:19.251287  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:19.251401  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:19.598065  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:19.750754  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:19.752331  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:20.110105  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:20.270309  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:20.277139  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:20.598257  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:20.640790  299508 pod_ready.go:103] pod "coredns-6f6b679f8f-zj4kb" in "kube-system" namespace has status "Ready":"False"
	I0909 11:43:20.751709  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:20.753685  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:21.098869  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:21.250194  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:21.250994  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:21.597275  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:21.750651  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:21.752263  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:22.098139  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:22.250478  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:22.251223  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:22.598681  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:22.749840  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:22.751844  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:23.098005  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:23.136610  299508 pod_ready.go:103] pod "coredns-6f6b679f8f-zj4kb" in "kube-system" namespace has status "Ready":"False"
	I0909 11:43:23.249487  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:23.250455  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:23.598221  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:23.751524  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:23.752413  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:24.100541  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:24.253370  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:24.254589  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:24.598193  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:24.637057  299508 pod_ready.go:93] pod "coredns-6f6b679f8f-zj4kb" in "kube-system" namespace has status "Ready":"True"
	I0909 11:43:24.637082  299508 pod_ready.go:82] duration metric: took 20.507028612s for pod "coredns-6f6b679f8f-zj4kb" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:24.637094  299508 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-630724" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:24.642757  299508 pod_ready.go:93] pod "etcd-addons-630724" in "kube-system" namespace has status "Ready":"True"
	I0909 11:43:24.642782  299508 pod_ready.go:82] duration metric: took 5.68056ms for pod "etcd-addons-630724" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:24.642796  299508 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-630724" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:24.649022  299508 pod_ready.go:93] pod "kube-apiserver-addons-630724" in "kube-system" namespace has status "Ready":"True"
	I0909 11:43:24.649095  299508 pod_ready.go:82] duration metric: took 6.290419ms for pod "kube-apiserver-addons-630724" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:24.649122  299508 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-630724" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:24.656152  299508 pod_ready.go:93] pod "kube-controller-manager-addons-630724" in "kube-system" namespace has status "Ready":"True"
	I0909 11:43:24.656223  299508 pod_ready.go:82] duration metric: took 7.079822ms for pod "kube-controller-manager-addons-630724" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:24.656248  299508 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5gj4z" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:24.663139  299508 pod_ready.go:93] pod "kube-proxy-5gj4z" in "kube-system" namespace has status "Ready":"True"
	I0909 11:43:24.663208  299508 pod_ready.go:82] duration metric: took 6.941534ms for pod "kube-proxy-5gj4z" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:24.663234  299508 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-630724" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:24.751877  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:24.754702  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:25.100448  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:25.100681  299508 pod_ready.go:93] pod "kube-scheduler-addons-630724" in "kube-system" namespace has status "Ready":"True"
	I0909 11:43:25.100695  299508 pod_ready.go:82] duration metric: took 437.442585ms for pod "kube-scheduler-addons-630724" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:25.100708  299508 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rdh99" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:25.250314  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:25.253752  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:25.598139  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:25.752452  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:25.753462  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:26.098926  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:26.251748  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:26.254248  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:26.598231  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:26.749778  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:26.752667  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:27.097814  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:27.109712  299508 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rdh99" in "kube-system" namespace has status "Ready":"False"
	I0909 11:43:27.250796  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:27.251596  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:27.597396  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:27.754417  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:27.755685  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:28.098262  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:28.251889  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:28.252769  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:28.600304  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:28.612860  299508 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-rdh99" in "kube-system" namespace has status "Ready":"True"
	I0909 11:43:28.612932  299508 pod_ready.go:82] duration metric: took 3.512214453s for pod "nvidia-device-plugin-daemonset-rdh99" in "kube-system" namespace to be "Ready" ...
	I0909 11:43:28.612958  299508 pod_ready.go:39] duration metric: took 25.012525381s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
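
The pod_ready.go lines above poll each pod's Ready condition until it reports True. As a minimal illustration of what that status check looks like, here is a hedged Go sketch using client-go; the kubeconfig path and the choice of the etcd-addons-630724 pod are assumptions for the example, not part of the test harness itself.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether a pod's Ready condition is True, which is
    // the state the pod_ready.go lines above are waiting for.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Kubeconfig path is an assumption for this sketch.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-addons-630724", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("Ready:", isPodReady(pod))
    }
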
	I0909 11:43:28.613013  299508 api_server.go:52] waiting for apiserver process to appear ...
	I0909 11:43:28.613130  299508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0909 11:43:28.632827  299508 api_server.go:72] duration metric: took 28.53351033s to wait for apiserver process to appear ...
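
The api_server.go entries above wait for the kube-apiserver process to appear by running sudo pgrep -xnf kube-apiserver.*minikube.* on the node. A rough sketch of such a poll loop follows; running pgrep locally via os/exec, without sudo and without minikube's ssh_runner, is an assumption of the sketch.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until a process matching pattern exists
    // or the timeout elapses.
    func waitForProcess(pattern string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 when at least one process matches the pattern.
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
    }

    func main() {
    	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
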
	I0909 11:43:28.632857  299508 api_server.go:88] waiting for apiserver healthz status ...
	I0909 11:43:28.632878  299508 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0909 11:43:28.641394  299508 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0909 11:43:28.642667  299508 api_server.go:141] control plane version: v1.31.0
	I0909 11:43:28.642704  299508 api_server.go:131] duration metric: took 9.831513ms to wait for apiserver health ...
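
Once the process exists, the log shows a healthz probe against https://192.168.49.2:8443/healthz that succeeds with a 200 and the body "ok". A minimal Go sketch of that kind of probe is below; skipping TLS verification is an assumption made to keep the example self-contained, where the real check would trust the cluster's CA certificate instead.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// InsecureSkipVerify is an assumption for the sketch only.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// A healthy apiserver answers 200 with the body "ok", as in the log.
    	fmt.Printf("%d: %s\n", resp.StatusCode, body)
    }
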
	I0909 11:43:28.642713  299508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0909 11:43:28.656148  299508 system_pods.go:59] 18 kube-system pods found
	I0909 11:43:28.656191  299508 system_pods.go:61] "coredns-6f6b679f8f-zj4kb" [29e61772-58b9-4b6d-88fd-d8ce4ba0b38a] Running
	I0909 11:43:28.656202  299508 system_pods.go:61] "csi-hostpath-attacher-0" [3842728e-66ca-422e-acde-133bae99d3c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0909 11:43:28.656212  299508 system_pods.go:61] "csi-hostpath-resizer-0" [97de549f-71e2-409c-9da8-42b295dc5a57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0909 11:43:28.656220  299508 system_pods.go:61] "csi-hostpathplugin-s47vz" [2734ebd9-ff76-48e1-a4ea-0e597198dc62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0909 11:43:28.656226  299508 system_pods.go:61] "etcd-addons-630724" [cff444ae-d27c-41dd-a984-b5b198026ebc] Running
	I0909 11:43:28.656232  299508 system_pods.go:61] "kindnet-xkh4c" [7b75dbb8-b91e-435a-9b72-fa4983916f50] Running
	I0909 11:43:28.656239  299508 system_pods.go:61] "kube-apiserver-addons-630724" [8e2d4657-514c-4cec-b5af-72f405523d53] Running
	I0909 11:43:28.656247  299508 system_pods.go:61] "kube-controller-manager-addons-630724" [9bbd5362-5aec-493f-9102-041a680f0cb2] Running
	I0909 11:43:28.656252  299508 system_pods.go:61] "kube-ingress-dns-minikube" [a1b0cae1-6265-4c78-a01e-0caddeaf2dc5] Running
	I0909 11:43:28.656263  299508 system_pods.go:61] "kube-proxy-5gj4z" [787b1893-00ed-425e-b052-e07f74f62a36] Running
	I0909 11:43:28.656267  299508 system_pods.go:61] "kube-scheduler-addons-630724" [cf7047ba-483f-4d7f-8e62-808d55c508a3] Running
	I0909 11:43:28.656273  299508 system_pods.go:61] "metrics-server-84c5f94fbc-mtk8x" [d3a4f297-8b21-4bf6-b16d-87007ad009c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0909 11:43:28.656283  299508 system_pods.go:61] "nvidia-device-plugin-daemonset-rdh99" [2c40ce95-e2f5-4194-a39f-80ddedabf707] Running
	I0909 11:43:28.656292  299508 system_pods.go:61] "registry-6fb4cdfc84-mr9ck" [e7dd8cff-56cc-4632-a210-f2a55ade65eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0909 11:43:28.656303  299508 system_pods.go:61] "registry-proxy-dm8pk" [a039b26f-4cfc-480f-9f4f-bf39b72b5d47] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0909 11:43:28.656311  299508 system_pods.go:61] "snapshot-controller-56fcc65765-8542r" [cafb2f79-0974-4e97-aa7a-d8d589bbd43f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0909 11:43:28.656323  299508 system_pods.go:61] "snapshot-controller-56fcc65765-pf2x7" [639db539-a517-4a95-8696-5635904081e5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0909 11:43:28.656327  299508 system_pods.go:61] "storage-provisioner" [5845b8e9-9588-46a8-9800-6ecd13c0c585] Running
	I0909 11:43:28.656335  299508 system_pods.go:74] duration metric: took 13.615805ms to wait for pod list to return data ...
	I0909 11:43:28.656347  299508 default_sa.go:34] waiting for default service account to be created ...
	I0909 11:43:28.659308  299508 default_sa.go:45] found service account: "default"
	I0909 11:43:28.659340  299508 default_sa.go:55] duration metric: took 2.987275ms for default service account to be created ...
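
The default_sa.go lines wait for the "default" service account to exist in the default namespace. A short client-go sketch of that lookup, with the kubeconfig path again as an assumption:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// The default_sa.go lines above wait for exactly this object to exist.
    	sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
    	if err != nil {
    		fmt.Println("default service account not found yet:", err)
    		return
    	}
    	fmt.Println("found service account:", sa.Name)
    }
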
	I0909 11:43:28.659350  299508 system_pods.go:116] waiting for k8s-apps to be running ...
	I0909 11:43:28.751743  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:28.752270  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:28.841284  299508 system_pods.go:86] 18 kube-system pods found
	I0909 11:43:28.841322  299508 system_pods.go:89] "coredns-6f6b679f8f-zj4kb" [29e61772-58b9-4b6d-88fd-d8ce4ba0b38a] Running
	I0909 11:43:28.841344  299508 system_pods.go:89] "csi-hostpath-attacher-0" [3842728e-66ca-422e-acde-133bae99d3c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0909 11:43:28.841352  299508 system_pods.go:89] "csi-hostpath-resizer-0" [97de549f-71e2-409c-9da8-42b295dc5a57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0909 11:43:28.841360  299508 system_pods.go:89] "csi-hostpathplugin-s47vz" [2734ebd9-ff76-48e1-a4ea-0e597198dc62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0909 11:43:28.841364  299508 system_pods.go:89] "etcd-addons-630724" [cff444ae-d27c-41dd-a984-b5b198026ebc] Running
	I0909 11:43:28.841369  299508 system_pods.go:89] "kindnet-xkh4c" [7b75dbb8-b91e-435a-9b72-fa4983916f50] Running
	I0909 11:43:28.841374  299508 system_pods.go:89] "kube-apiserver-addons-630724" [8e2d4657-514c-4cec-b5af-72f405523d53] Running
	I0909 11:43:28.841378  299508 system_pods.go:89] "kube-controller-manager-addons-630724" [9bbd5362-5aec-493f-9102-041a680f0cb2] Running
	I0909 11:43:28.841383  299508 system_pods.go:89] "kube-ingress-dns-minikube" [a1b0cae1-6265-4c78-a01e-0caddeaf2dc5] Running
	I0909 11:43:28.841387  299508 system_pods.go:89] "kube-proxy-5gj4z" [787b1893-00ed-425e-b052-e07f74f62a36] Running
	I0909 11:43:28.841391  299508 system_pods.go:89] "kube-scheduler-addons-630724" [cf7047ba-483f-4d7f-8e62-808d55c508a3] Running
	I0909 11:43:28.841398  299508 system_pods.go:89] "metrics-server-84c5f94fbc-mtk8x" [d3a4f297-8b21-4bf6-b16d-87007ad009c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0909 11:43:28.841407  299508 system_pods.go:89] "nvidia-device-plugin-daemonset-rdh99" [2c40ce95-e2f5-4194-a39f-80ddedabf707] Running
	I0909 11:43:28.841415  299508 system_pods.go:89] "registry-6fb4cdfc84-mr9ck" [e7dd8cff-56cc-4632-a210-f2a55ade65eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0909 11:43:28.841426  299508 system_pods.go:89] "registry-proxy-dm8pk" [a039b26f-4cfc-480f-9f4f-bf39b72b5d47] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0909 11:43:28.841433  299508 system_pods.go:89] "snapshot-controller-56fcc65765-8542r" [cafb2f79-0974-4e97-aa7a-d8d589bbd43f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0909 11:43:28.841440  299508 system_pods.go:89] "snapshot-controller-56fcc65765-pf2x7" [639db539-a517-4a95-8696-5635904081e5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0909 11:43:28.841445  299508 system_pods.go:89] "storage-provisioner" [5845b8e9-9588-46a8-9800-6ecd13c0c585] Running
	I0909 11:43:28.841453  299508 system_pods.go:126] duration metric: took 182.096381ms to wait for k8s-apps to be running ...
	I0909 11:43:28.841461  299508 system_svc.go:44] waiting for kubelet service to be running ...
	I0909 11:43:28.841520  299508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0909 11:43:28.855696  299508 system_svc.go:56] duration metric: took 14.223867ms (WaitForService) to wait for kubelet
	I0909 11:43:28.855728  299508 kubeadm.go:582] duration metric: took 28.756430101s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
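
The system_svc.go check above asks systemd whether kubelet is active. A tiny Go sketch of the same idea; dropping the sudo prefix and the extra "service" token seen in the logged command is an assumption for running it locally.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// `systemctl is-active --quiet` exits 0 only when the unit is active,
    	// which is the signal the kubelet service wait relies on.
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }
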
	I0909 11:43:28.855750  299508 node_conditions.go:102] verifying NodePressure condition ...
	I0909 11:43:29.048016  299508 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0909 11:43:29.048106  299508 node_conditions.go:123] node cpu capacity is 2
	I0909 11:43:29.048135  299508 node_conditions.go:105] duration metric: took 192.377837ms to run NodePressure ...
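
The node_conditions.go lines read per-node CPU and ephemeral-storage capacity (here: 2 CPUs and 203034800Ki, which is why the later test-job pod's CPU request cannot be satisfied). A hedged client-go sketch that prints the same two quantities, with the kubeconfig path assumed:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		// Matches the node_conditions.go lines above: CPU and
    		// ephemeral-storage capacity per node.
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }
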
	I0909 11:43:29.048161  299508 start.go:241] waiting for startup goroutines ...
	I0909 11:43:29.114835  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:29.276595  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:29.277078  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:29.598201  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:29.750802  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:29.752035  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:30.120673  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:30.250499  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:30.253576  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:30.598447  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:30.754034  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:30.755017  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:31.098350  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:31.255292  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:31.256434  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:31.597704  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:31.750113  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:31.750796  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:32.098268  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:32.253184  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:32.254352  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:32.599172  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:32.752639  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:32.753705  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:33.098419  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:33.251017  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:33.252019  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:33.597430  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:33.751269  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:33.752178  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:34.098702  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:34.251397  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:34.253718  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:34.598933  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:34.751642  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:34.752913  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:35.098441  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:35.250831  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:35.252897  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:35.598367  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:35.752718  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:35.753232  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:36.103083  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:36.261369  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:36.261513  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:36.598464  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:36.752386  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:36.753766  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:37.098575  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:37.252653  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:37.254790  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:37.598786  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:37.750539  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 11:43:37.752459  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:38.098711  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:38.251299  299508 kapi.go:107] duration metric: took 27.004592251s to wait for kubernetes.io/minikube-addons=registry ...
	I0909 11:43:38.254299  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:38.598461  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:38.751218  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:39.098573  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:39.252410  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:39.598189  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:39.751168  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:40.105483  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:40.250671  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:40.601227  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:40.751238  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:41.097813  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:41.253487  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:41.599382  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:41.751254  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:42.098049  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:42.254809  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:42.597976  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:42.750266  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:43.100806  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:43.251603  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:43.599175  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:43.765970  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:44.098284  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:44.264635  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:44.602501  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:44.750476  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:45.106947  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:45.269618  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:45.598248  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:45.753605  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:46.098168  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:46.250988  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:46.598076  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:46.752184  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:47.098014  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:47.249927  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:47.600604  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:47.751337  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:48.098312  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:48.250692  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:48.598916  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:48.750578  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:49.097677  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:49.250233  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:49.598904  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:49.751216  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:50.105630  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:50.249740  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:50.597826  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:50.751116  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:51.098202  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:51.258296  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:51.598790  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:51.752810  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:52.101627  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:52.250310  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:52.598573  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:52.769406  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:53.098259  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:53.251711  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:53.598922  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:53.752022  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:54.099751  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:54.252837  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:54.598408  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:54.751771  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:55.099140  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:55.250317  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:55.597915  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:55.750130  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:56.099593  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:56.252119  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:56.597802  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:56.750769  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:57.098371  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:57.250639  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:57.598046  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:57.754980  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:58.098395  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:58.250812  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:58.597809  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:58.750665  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:59.097898  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:59.250749  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:43:59.598612  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:43:59.750349  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:00.128244  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:44:00.259449  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:00.600821  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:44:00.753013  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:01.098920  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:44:01.251274  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:01.598559  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:44:01.755152  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:02.098997  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:44:02.252172  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:02.598328  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:44:02.751717  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:03.097627  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:44:03.251464  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:03.598919  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:44:03.750744  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:04.099273  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:44:04.254968  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:04.598605  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:44:04.749717  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:05.098444  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:44:05.250948  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:05.598467  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:44:05.751665  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:06.098419  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 11:44:06.250945  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:06.599195  299508 kapi.go:107] duration metric: took 54.506455267s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0909 11:44:06.750646  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:07.250597  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:07.750475  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:08.250641  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:08.749793  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:09.250720  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:09.750299  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:10.251047  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:10.750255  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:11.251596  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:11.749828  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:12.250234  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:12.750256  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:13.251197  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:13.750196  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:14.251502  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:14.749901  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:15.251560  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:15.750868  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:16.250863  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:16.751179  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:17.250168  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:17.750500  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:18.251045  299508 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 11:44:18.750033  299508 kapi.go:107] duration metric: took 1m7.504347264s to wait for app.kubernetes.io/name=ingress-nginx ...
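
The kapi.go loops above poll pods by label selector (registry, csi-hostpath-driver, ingress-nginx) until they leave Pending, logging one line per tick. The following Go sketch mirrors that pattern; waitForLabel is a hypothetical helper written for this example, and the kubeconfig path, poll interval, and timeout are assumptions rather than the harness's actual values.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls until every pod matching the selector is Running,
    // a sketch of the kapi.go wait loop, not minikube's implementation.
    func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 {
    			ready := true
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					ready = false
    					break
    				}
    			}
    			if ready {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("pods %q in %q not running within %s", selector, ns, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitForLabel(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 10*time.Minute))
    }

The gcp-auth wait that follows uses the same loop shape against the kubernetes.io/minikube-addons=gcp-auth selector.
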
	I0909 11:44:35.457712  299508 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0909 11:44:35.457743  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:35.956363  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:36.455894  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:36.957583  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:37.455345  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:37.958305  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:38.456340  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:38.956487  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:39.456202  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:39.955290  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:40.455999  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:40.956657  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:41.455791  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:41.955675  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:42.455889  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:42.956115  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:43.456066  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:43.956147  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:44.455675  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:44.956012  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:45.455605  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:45.955841  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:46.455563  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:46.958517  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:47.456966  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:47.955321  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:48.456208  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:48.955224  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:49.457576  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:49.959605  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:50.455377  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:50.955071  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:51.456066  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:51.955751  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:52.455860  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:52.956104  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:53.456491  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:53.956777  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:54.455227  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:54.955538  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:55.455923  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:55.955392  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:56.456902  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:56.956052  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:57.456008  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:57.956601  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:58.455567  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:58.956048  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:59.456743  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:44:59.956012  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:00.457083  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:00.956650  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:01.457577  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:01.955442  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:02.456137  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:02.955894  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:03.456077  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:03.957261  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:04.455152  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:04.956643  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:05.456236  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:05.956319  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:06.457016  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:06.960450  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:07.456897  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:07.955742  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:08.455932  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:08.955709  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:09.455985  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:09.955298  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:10.455968  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:10.955869  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:11.455550  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:11.955405  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:12.455688  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:12.955948  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:13.456025  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:13.956048  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:14.455213  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:14.955852  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:15.455897  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:15.955169  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:16.455990  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:16.958235  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:17.456380  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:17.956672  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:18.455734  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:18.962266  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:19.459341  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:19.956368  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:20.456414  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:20.955173  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:21.455653  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:21.954956  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:22.456243  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:22.955595  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:23.456625  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:23.955875  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:24.455713  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:24.955508  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:25.455911  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:25.955674  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:26.455635  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:26.955449  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:27.455810  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:27.955803  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:28.456186  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:28.955212  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:29.455949  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:29.955316  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:30.455395  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:30.955160  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:31.455567  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:31.955199  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:32.455832  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:32.955352  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:33.456597  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:33.955628  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:34.455401  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:34.955643  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:35.456385  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:35.956174  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:36.456461  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:36.957571  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:37.455472  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:37.956538  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:38.455845  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:38.955079  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:39.455625  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:39.956157  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:40.455877  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:40.956075  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:41.455988  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:41.955783  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:42.456399  299508 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0909 11:45:42.956443  299508 kapi.go:107] duration metric: took 2m29.50443355s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0909 11:45:42.958903  299508 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-630724 cluster.
	I0909 11:45:42.961055  299508 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0909 11:45:42.963079  299508 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0909 11:45:42.965292  299508 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, volcano, ingress-dns, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0909 11:45:42.967280  299508 addons.go:510] duration metric: took 2m42.867117127s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner volcano ingress-dns metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0909 11:45:42.967349  299508 start.go:246] waiting for cluster config update ...
	I0909 11:45:42.967380  299508 start.go:255] writing updated cluster config ...
	I0909 11:45:42.968033  299508 ssh_runner.go:195] Run: rm -f paused
	I0909 11:45:43.318269  299508 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0909 11:45:43.320230  299508 out.go:177] * Done! kubectl is now configured to use "addons-630724" cluster and "default" namespace by default
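
The `gcp-auth-skip-secret` hint in the output above is the supported opt-out for the credential-mounting webhook. A minimal sketch of a pod carrying that label follows; the pod name and image are placeholders, and the "true" value is an assumption (the label key itself is the only part this log confirms):

    # Hypothetical pod that opts out of GCP credential mounting.
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                 # illustrative name
      labels:
        gcp-auth-skip-secret: "true"     # key taken from the log line above
    spec:
      containers:
      - name: app
        image: nginx                     # any image; placeholder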
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	b2b4400ec48f6       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   0edb93973fbff       gadget-25vg2
	be89f95d4432a       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   4a2c720d08d32       gcp-auth-89d5ffd79-vmb4s
	c84d82deeb16d       8b46b1cd48760       4 minutes ago       Running             admission                                0                   a6c354848dd67       volcano-admission-77d7d48b68-zbqv2
	78a237e5b91c6       289a818c8d9c5       4 minutes ago       Running             controller                               0                   67e3bde6fe26d       ingress-nginx-controller-bc57996ff-zhr58
	56587e51e9c4d       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   8a95f2f95600f       csi-hostpathplugin-s47vz
	bdcb8d6236005       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   8a95f2f95600f       csi-hostpathplugin-s47vz
	c795dd7a9c4f0       922312104da8a       4 minutes ago       Running             liveness-probe                           0                   8a95f2f95600f       csi-hostpathplugin-s47vz
	77f5f00608d9a       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   8a95f2f95600f       csi-hostpathplugin-s47vz
	94dd9213a8d77       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   8a95f2f95600f       csi-hostpathplugin-s47vz
	1b6c908989f70       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   dde478652a045       csi-hostpath-resizer-0
	81f73590b5050       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   aabd6fc69b3bf       csi-hostpath-attacher-0
	f12ed2f780d33       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   8a95f2f95600f       csi-hostpathplugin-s47vz
	70b1fd29ef0b0       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   5949e27429311       volcano-scheduler-576bc46687-tn2vq
	82ead50e701be       420193b27261a       5 minutes ago       Exited              patch                                    0                   ea32b92174eb5       ingress-nginx-admission-patch-z4gbs
	ca39cd378cd21       420193b27261a       5 minutes ago       Exited              create                                   0                   80ddcb660a602       ingress-nginx-admission-create-p8lvw
	079c324fa1f45       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   f72c33de2434e       volcano-controllers-56675bb4d5-zg8pg
	f9e7ed150c683       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   beb6ef54121f2       metrics-server-84c5f94fbc-mtk8x
	fc2e2c473ef38       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   a900d12e32c42       snapshot-controller-56fcc65765-pf2x7
	6ef9a9b34fc18       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   5ab5181999475       snapshot-controller-56fcc65765-8542r
	13d712dce18a7       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   24c5b092fe61f       local-path-provisioner-86d989889c-zxx64
	9633b37c7e2fa       8be4bcf8ec607       5 minutes ago       Running             cloud-spanner-emulator                   0                   00712839bb5e7       cloud-spanner-emulator-769b77f747-ms8dm
	d512c42317707       6fed88f43b276       5 minutes ago       Running             registry                                 0                   a88b5c7411342       registry-6fb4cdfc84-mr9ck
	0d2d4bcd329c9       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   9ff6f809716d3       registry-proxy-dm8pk
	564c5d70fcc6c       77bdba588b953       5 minutes ago       Running             yakd                                     0                   fb6c2379818d6       yakd-dashboard-67d98fc6b-54qsf
	b2d665c9a8b51       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   c85f492728c16       nvidia-device-plugin-daemonset-rdh99
	ae0b843281229       2437cf7621777       5 minutes ago       Running             coredns                                  0                   81ad3fdc2ee81       coredns-6f6b679f8f-zj4kb
	ae5de811bd19d       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   dcc56d83812ef       kube-ingress-dns-minikube
	7df1caab246cb       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   346e39fa22b4a       storage-provisioner
	b7e50e771b064       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                              0                   6a9bc20a3d1f8       kindnet-xkh4c
	49c6943beea95       71d55d66fd4ee       6 minutes ago       Running             kube-proxy                               0                   97937ed199174       kube-proxy-5gj4z
	36a5807fddd61       fbbbd428abb4d       6 minutes ago       Running             kube-scheduler                           0                   5b6cf3ec76e14       kube-scheduler-addons-630724
	3094c2a2bfafb       27e3830e14027       6 minutes ago       Running             etcd                                     0                   93f6047267051       etcd-addons-630724
	1a42dda2b9553       fcb0683e6bdbd       6 minutes ago       Running             kube-controller-manager                  0                   08685c88590e1       kube-controller-manager-addons-630724
	51b6d3c7a0c6d       cd0f0ae0ec9e0       6 minutes ago       Running             kube-apiserver                           0                   ca2741e9e1426       kube-apiserver-addons-630724
	
	
	==> containerd <==
	Sep 09 11:45:54 addons-630724 containerd[816]: time="2024-09-09T11:45:54.835017728Z" level=info msg="RemovePodSandbox \"371c40b1c26abaecffe923ad6235e8cec2c0a4d5131ac2137584f4f965e2460e\" returns successfully"
	Sep 09 11:46:43 addons-630724 containerd[816]: time="2024-09-09T11:46:43.754347161Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 09 11:46:43 addons-630724 containerd[816]: time="2024-09-09T11:46:43.874681363Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 09 11:46:43 addons-630724 containerd[816]: time="2024-09-09T11:46:43.876260912Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 09 11:46:43 addons-630724 containerd[816]: time="2024-09-09T11:46:43.879769457Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 125.371031ms"
	Sep 09 11:46:43 addons-630724 containerd[816]: time="2024-09-09T11:46:43.879816079Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 09 11:46:43 addons-630724 containerd[816]: time="2024-09-09T11:46:43.882064213Z" level=info msg="CreateContainer within sandbox \"0edb93973fbffee25b8769f80c7bc27d7df46a9ff431cd8e4b33f40d73a6ad48\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 09 11:46:43 addons-630724 containerd[816]: time="2024-09-09T11:46:43.898580638Z" level=info msg="CreateContainer within sandbox \"0edb93973fbffee25b8769f80c7bc27d7df46a9ff431cd8e4b33f40d73a6ad48\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"b2b4400ec48f6dfa07e51c937f306b77a015a2fe931b2ccaa4d5bb2077e372e4\""
	Sep 09 11:46:43 addons-630724 containerd[816]: time="2024-09-09T11:46:43.899416008Z" level=info msg="StartContainer for \"b2b4400ec48f6dfa07e51c937f306b77a015a2fe931b2ccaa4d5bb2077e372e4\""
	Sep 09 11:46:43 addons-630724 containerd[816]: time="2024-09-09T11:46:43.981802598Z" level=info msg="StartContainer for \"b2b4400ec48f6dfa07e51c937f306b77a015a2fe931b2ccaa4d5bb2077e372e4\" returns successfully"
	Sep 09 11:46:45 addons-630724 containerd[816]: time="2024-09-09T11:46:45.862296335Z" level=info msg="shim disconnected" id=b2b4400ec48f6dfa07e51c937f306b77a015a2fe931b2ccaa4d5bb2077e372e4 namespace=k8s.io
	Sep 09 11:46:45 addons-630724 containerd[816]: time="2024-09-09T11:46:45.862365258Z" level=warning msg="cleaning up after shim disconnected" id=b2b4400ec48f6dfa07e51c937f306b77a015a2fe931b2ccaa4d5bb2077e372e4 namespace=k8s.io
	Sep 09 11:46:45 addons-630724 containerd[816]: time="2024-09-09T11:46:45.862378263Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 09 11:46:46 addons-630724 containerd[816]: time="2024-09-09T11:46:46.041297079Z" level=info msg="RemoveContainer for \"e0fb7aebe0a58308786aae25410a315ae23286dd34710451c1a641030196d8c4\""
	Sep 09 11:46:46 addons-630724 containerd[816]: time="2024-09-09T11:46:46.055582494Z" level=info msg="RemoveContainer for \"e0fb7aebe0a58308786aae25410a315ae23286dd34710451c1a641030196d8c4\" returns successfully"
	Sep 09 11:46:54 addons-630724 containerd[816]: time="2024-09-09T11:46:54.839376802Z" level=info msg="RemoveContainer for \"34f360fed2a60dd7bcba2def749cc2273b313c85bdc3dd0af6b15e3f1f504d22\""
	Sep 09 11:46:54 addons-630724 containerd[816]: time="2024-09-09T11:46:54.845694097Z" level=info msg="RemoveContainer for \"34f360fed2a60dd7bcba2def749cc2273b313c85bdc3dd0af6b15e3f1f504d22\" returns successfully"
	Sep 09 11:46:54 addons-630724 containerd[816]: time="2024-09-09T11:46:54.847768488Z" level=info msg="StopPodSandbox for \"2993acc78a0ad37ccc1068a0f4cf571445bddf2814c61956a81d6844f3babfdc\""
	Sep 09 11:46:54 addons-630724 containerd[816]: time="2024-09-09T11:46:54.855477665Z" level=info msg="TearDown network for sandbox \"2993acc78a0ad37ccc1068a0f4cf571445bddf2814c61956a81d6844f3babfdc\" successfully"
	Sep 09 11:46:54 addons-630724 containerd[816]: time="2024-09-09T11:46:54.855518108Z" level=info msg="StopPodSandbox for \"2993acc78a0ad37ccc1068a0f4cf571445bddf2814c61956a81d6844f3babfdc\" returns successfully"
	Sep 09 11:46:54 addons-630724 containerd[816]: time="2024-09-09T11:46:54.856101229Z" level=info msg="RemovePodSandbox for \"2993acc78a0ad37ccc1068a0f4cf571445bddf2814c61956a81d6844f3babfdc\""
	Sep 09 11:46:54 addons-630724 containerd[816]: time="2024-09-09T11:46:54.856153832Z" level=info msg="Forcibly stopping sandbox \"2993acc78a0ad37ccc1068a0f4cf571445bddf2814c61956a81d6844f3babfdc\""
	Sep 09 11:46:54 addons-630724 containerd[816]: time="2024-09-09T11:46:54.863701245Z" level=info msg="TearDown network for sandbox \"2993acc78a0ad37ccc1068a0f4cf571445bddf2814c61956a81d6844f3babfdc\" successfully"
	Sep 09 11:46:54 addons-630724 containerd[816]: time="2024-09-09T11:46:54.870248874Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2993acc78a0ad37ccc1068a0f4cf571445bddf2814c61956a81d6844f3babfdc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 09 11:46:54 addons-630724 containerd[816]: time="2024-09-09T11:46:54.870372820Z" level=info msg="RemovePodSandbox \"2993acc78a0ad37ccc1068a0f4cf571445bddf2814c61956a81d6844f3babfdc\" returns successfully"
	
	
	==> coredns [ae0b84328122953d7acd982d68ad712c21c98c324056c819fba660c9c43f6c33] <==
	[INFO] 10.244.0.4:50788 - 13263 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074461s
	[INFO] 10.244.0.4:46141 - 59359 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002052264s
	[INFO] 10.244.0.4:46141 - 31192 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002377054s
	[INFO] 10.244.0.4:60188 - 20794 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000056919s
	[INFO] 10.244.0.4:60188 - 12084 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000045965s
	[INFO] 10.244.0.4:57678 - 26283 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000135006s
	[INFO] 10.244.0.4:57678 - 62900 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000054194s
	[INFO] 10.244.0.4:59523 - 48472 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062334s
	[INFO] 10.244.0.4:59523 - 40533 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000045021s
	[INFO] 10.244.0.4:57820 - 65262 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061907s
	[INFO] 10.244.0.4:57820 - 7660 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035996s
	[INFO] 10.244.0.4:35713 - 2123 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002392201s
	[INFO] 10.244.0.4:35713 - 34381 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002618924s
	[INFO] 10.244.0.4:56198 - 32664 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000079991s
	[INFO] 10.244.0.4:56198 - 45723 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000071212s
	[INFO] 10.244.0.24:48977 - 20517 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004691268s
	[INFO] 10.244.0.24:39503 - 63590 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004774557s
	[INFO] 10.244.0.24:56970 - 38244 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000171536s
	[INFO] 10.244.0.24:45264 - 28219 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127926s
	[INFO] 10.244.0.24:51831 - 33091 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119392s
	[INFO] 10.244.0.24:45906 - 2493 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000205653s
	[INFO] 10.244.0.24:45936 - 14145 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002283767s
	[INFO] 10.244.0.24:60969 - 39360 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00218474s
	[INFO] 10.244.0.24:35870 - 38611 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001521653s
	[INFO] 10.244.0.24:48640 - 1430 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002231935s
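
The NXDOMAIN bursts above are ordinary search-path expansion: with the resolver's default `ndots:5`, a name such as storage.googleapis.com is tried against each cluster search domain (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the absolute query finally returns NOERROR. A pod that mostly resolves external names can trim this churn via `dnsConfig`; a sketch under that assumption, with a placeholder name and image:

    # Hypothetical pod that prefers absolute DNS lookups for dotted names.
    apiVersion: v1
    kind: Pod
    metadata:
      name: external-client          # illustrative name
    spec:
      dnsConfig:
        options:
        - name: ndots
          value: "1"                 # try the name as-is before search domains
      containers:
      - name: app
        image: curlimages/curl       # placeholder image
        command: ["sleep", "3600"]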
	
	
	==> describe nodes <==
	Name:               addons-630724
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-630724
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf17d6b4040a54caaa170f92a048a513bb2a2b0d
	                    minikube.k8s.io/name=addons-630724
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_09T11_42_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-630724
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-630724"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Sep 2024 11:42:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-630724
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Sep 2024 11:48:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Sep 2024 11:45:57 +0000   Mon, 09 Sep 2024 11:42:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Sep 2024 11:45:57 +0000   Mon, 09 Sep 2024 11:42:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Sep 2024 11:45:57 +0000   Mon, 09 Sep 2024 11:42:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Sep 2024 11:45:57 +0000   Mon, 09 Sep 2024 11:42:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-630724
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 130cf7c327844c8191f904e90564b1b7
	  System UUID:                4b5ee642-66d0-4a05-8720-33700d75e3f5
	  Boot ID:                    7d6e1781-aee8-4484-a8de-8a5868c84ccd
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.21
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-ms8dm     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  gadget                      gadget-25vg2                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  gcp-auth                    gcp-auth-89d5ffd79-vmb4s                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-zhr58    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m54s
	  kube-system                 coredns-6f6b679f8f-zj4kb                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m2s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 csi-hostpathplugin-s47vz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 etcd-addons-630724                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m7s
	  kube-system                 kindnet-xkh4c                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-630724                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-addons-630724       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-5gj4z                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-630724                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 metrics-server-84c5f94fbc-mtk8x             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m56s
	  kube-system                 nvidia-device-plugin-daemonset-rdh99        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-6fb4cdfc84-mr9ck                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 registry-proxy-dm8pk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 snapshot-controller-56fcc65765-8542r        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 snapshot-controller-56fcc65765-pf2x7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  local-path-storage          local-path-provisioner-86d989889c-zxx64     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  volcano-system              volcano-admission-77d7d48b68-zbqv2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-controllers-56675bb4d5-zg8pg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  volcano-system              volcano-scheduler-576bc46687-tn2vq          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-54qsf              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m                     kube-proxy       
	  Normal   NodeHasSufficientMemory  6m15s (x8 over 6m15s)  kubelet          Node addons-630724 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m15s (x7 over 6m15s)  kubelet          Node addons-630724 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m15s (x7 over 6m15s)  kubelet          Node addons-630724 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m8s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m8s                   kubelet          Node addons-630724 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m8s                   kubelet          Node addons-630724 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m8s                   kubelet          Node addons-630724 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m3s                   node-controller  Node addons-630724 event: Registered Node addons-630724 in Controller
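
The Allocated resources table above shows 1050m of this 2-CPU node already requested, leaving roughly 950m for new pods; anything that asks for a full CPU or more cannot be scheduled here. A sketch of a request sized to that headroom, with illustrative name, image, and figures:

    # Hypothetical pod sized to fit the node's remaining CPU budget.
    apiVersion: v1
    kind: Pod
    metadata:
      name: cpu-sized-example        # illustrative name
    spec:
      containers:
      - name: app
        image: nginx                 # placeholder image
        resources:
          requests:
            cpu: 500m                # fits within the ~950m left unrequested
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi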
	
	
	==> dmesg <==
	[Sep 9 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015096] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.478389] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.837018] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.634111] kauditd_printk_skb: 36 callbacks suppressed
	[Sep 9 10:32] hrtimer: interrupt took 14196427 ns
	[Sep 9 11:14] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [3094c2a2bfafbc38d9f5eafab0dca626003ede84f454774bcc0981067c0f9e26] <==
	{"level":"info","ts":"2024-09-09T11:42:48.791006Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-09T11:42:48.787858Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-09T11:42:48.791088Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-09T11:42:48.795255Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-09T11:42:48.795304Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-09T11:42:48.918158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-09T11:42:48.918382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-09T11:42:48.918484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-09T11:42:48.918654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-09T11:42:48.918735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-09T11:42:48.918827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-09T11:42:48.918905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-09T11:42:48.919905Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-630724 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-09T11:42:48.920068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-09T11:42:48.920469Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-09T11:42:48.921516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-09T11:42:48.922329Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-09T11:42:48.922680Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-09T11:42:48.922826Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-09T11:42:48.923338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-09T11:42:48.922361Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-09T11:42:48.930563Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-09T11:42:48.923828Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-09T11:42:48.930986Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-09T11:42:48.931111Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [be89f95d4432a945c4652c532de56d1398ba5cbda4ec24bcd4ce004c9fc26c6e] <==
	2024/09/09 11:45:42 GCP Auth Webhook started!
	2024/09/09 11:45:59 Ready to marshal response ...
	2024/09/09 11:45:59 Ready to write response ...
	2024/09/09 11:46:00 Ready to marshal response ...
	2024/09/09 11:46:00 Ready to write response ...
	
	
	==> kernel <==
	 11:49:02 up  1:31,  0 users,  load average: 0.30, 1.22, 2.02
	Linux addons-630724 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b7e50e771b064702269b0d4247f0d015e833b5b0f007064d5f0cac8da968bffd] <==
	I0909 11:46:54.312559       1 main.go:299] handling current node
	I0909 11:47:04.309645       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0909 11:47:04.309686       1 main.go:299] handling current node
	I0909 11:47:14.314843       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0909 11:47:14.314878       1 main.go:299] handling current node
	I0909 11:47:24.318843       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0909 11:47:24.318878       1 main.go:299] handling current node
	I0909 11:47:34.310308       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0909 11:47:34.310404       1 main.go:299] handling current node
	I0909 11:47:44.316653       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0909 11:47:44.316688       1 main.go:299] handling current node
	I0909 11:47:54.312168       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0909 11:47:54.312265       1 main.go:299] handling current node
	I0909 11:48:04.310541       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0909 11:48:04.310581       1 main.go:299] handling current node
	I0909 11:48:14.313864       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0909 11:48:14.313901       1 main.go:299] handling current node
	I0909 11:48:24.310738       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0909 11:48:24.310775       1 main.go:299] handling current node
	I0909 11:48:34.315839       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0909 11:48:34.315875       1 main.go:299] handling current node
	I0909 11:48:44.318036       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0909 11:48:44.318075       1 main.go:299] handling current node
	I0909 11:48:54.309690       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0909 11:48:54.309727       1 main.go:299] handling current node
	
	
	==> kube-apiserver [51b6d3c7a0c6d72a81af6f21db99bc978374708a69ea364661b7d7ad2b54e0f9] <==
	W0909 11:44:14.572844       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.110.71:443: connect: connection refused
	W0909 11:44:15.672720       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.110.71:443: connect: connection refused
	W0909 11:44:16.283431       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.122.255:443: connect: connection refused
	E0909 11:44:16.283484       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.122.255:443: connect: connection refused" logger="UnhandledError"
	W0909 11:44:16.285539       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.101.110.71:443: connect: connection refused
	W0909 11:44:16.304866       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.122.255:443: connect: connection refused
	E0909 11:44:16.304921       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.122.255:443: connect: connection refused" logger="UnhandledError"
	W0909 11:44:16.306635       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.101.110.71:443: connect: connection refused
	W0909 11:44:16.724645       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.110.71:443: connect: connection refused
	W0909 11:44:17.760338       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.110.71:443: connect: connection refused
	W0909 11:44:18.823741       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.110.71:443: connect: connection refused
	W0909 11:44:19.922357       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.110.71:443: connect: connection refused
	W0909 11:44:20.962004       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.110.71:443: connect: connection refused
	W0909 11:44:21.999493       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.110.71:443: connect: connection refused
	W0909 11:44:23.030643       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.110.71:443: connect: connection refused
	W0909 11:44:24.059729       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.110.71:443: connect: connection refused
	W0909 11:44:25.098825       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.110.71:443: connect: connection refused
	W0909 11:44:35.260796       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.122.255:443: connect: connection refused
	E0909 11:44:35.260904       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.122.255:443: connect: connection refused" logger="UnhandledError"
	W0909 11:45:16.294322       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.122.255:443: connect: connection refused
	E0909 11:45:16.294361       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.122.255:443: connect: connection refused" logger="UnhandledError"
	W0909 11:45:16.313528       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.122.255:443: connect: connection refused
	E0909 11:45:16.313574       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.122.255:443: connect: connection refused" logger="UnhandledError"
	I0909 11:45:59.876852       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0909 11:45:59.913733       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
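
The contrast above between "failing closed" (mutatequeue.volcano.sh) and "failing open" (gcp-auth-mutate.k8s.io) comes down to each webhook's `failurePolicy`: `Fail` rejects the request when the webhook endpoint is unreachable, while `Ignore` lets it through. A sketch of where the field lives; every name and the service wiring here are illustrative, not taken from either addon's actual configuration:

    # Hypothetical webhook registration showing the failurePolicy field.
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: example-webhook            # illustrative name
    webhooks:
    - name: example.mutate.k8s.io      # illustrative name
      failurePolicy: Ignore            # fail open; Fail would reject on error
      clientConfig:
        service:
          name: example-svc            # illustrative service
          namespace: default
          path: /mutate
      admissionReviewVersions: ["v1"]
      sideEffects: None
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]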
	
	
	==> kube-controller-manager [1a42dda2b95537cbca0a26449012a2a3b9bea44ad07b0e3559886c91d5820d3b] <==
	I0909 11:45:16.322250       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0909 11:45:16.324272       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0909 11:45:16.340183       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0909 11:45:16.345851       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0909 11:45:16.356586       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0909 11:45:16.366100       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0909 11:45:16.375787       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0909 11:45:17.720272       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0909 11:45:17.733832       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0909 11:45:18.957932       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0909 11:45:19.007630       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0909 11:45:19.966283       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0909 11:45:19.976567       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0909 11:45:19.982009       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0909 11:45:20.014383       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0909 11:45:20.090263       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0909 11:45:20.119191       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0909 11:45:42.846079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="15.248978ms"
	I0909 11:45:42.846535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="53.464µs"
	I0909 11:45:49.025134       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0909 11:45:49.065814       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0909 11:45:50.009650       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0909 11:45:50.110709       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0909 11:45:57.767887       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-630724"
	I0909 11:45:59.578729       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [49c6943beea95fe5f33705b54a5d87a5a11a9c8cdbdb7d359d7400ff19857718] <==
	I0909 11:43:01.581231       1 server_linux.go:66] "Using iptables proxy"
	I0909 11:43:01.679638       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0909 11:43:01.701524       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0909 11:43:01.764695       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0909 11:43:01.764763       1 server_linux.go:169] "Using iptables Proxier"
	I0909 11:43:01.766863       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0909 11:43:01.767252       1 server.go:483] "Version info" version="v1.31.0"
	I0909 11:43:01.767273       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0909 11:43:01.774179       1 config.go:326] "Starting node config controller"
	I0909 11:43:01.774220       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0909 11:43:01.774747       1 config.go:197] "Starting service config controller"
	I0909 11:43:01.774769       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0909 11:43:01.774783       1 config.go:104] "Starting endpoint slice config controller"
	I0909 11:43:01.774788       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0909 11:43:01.874730       1 shared_informer.go:320] Caches are synced for node config
	I0909 11:43:01.874908       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0909 11:43:01.874957       1 shared_informer.go:320] Caches are synced for service config
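
The startup warning above about `nodePortAddresses` being unset points at a kube-proxy configuration field (the message also suggests the `--nodeport-addresses primary` flag form accepted by newer releases). A sketch that restricts NodePort traffic to the node's subnet; the CIDR is inferred from the InternalIP 192.168.49.2 shown earlier and is an assumption:

    # Hypothetical kube-proxy configuration fragment; other fields keep defaults.
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    nodePortAddresses:
    - 192.168.49.0/24                # accept NodePort connections only on this range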
	
	
	==> kube-scheduler [36a5807fddd61ac2693288b01d648d83ef516dd1d01e208c1c5e902e79dc1bbc] <==
	E0909 11:42:52.301292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0909 11:42:52.301474       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	E0909 11:42:52.305202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0909 11:42:52.305225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0909 11:42:52.305251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0909 11:42:52.305266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0909 11:42:52.305280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0909 11:42:52.305296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0909 11:42:52.305309       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0909 11:42:52.305322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0909 11:42:52.305350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0909 11:42:52.305379       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0909 11:42:53.212721       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0909 11:42:53.212954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0909 11:42:53.261769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0909 11:42:53.262010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0909 11:42:53.301982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0909 11:42:53.302235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0909 11:42:53.363799       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0909 11:42:53.364043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0909 11:42:53.445702       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0909 11:42:53.445941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0909 11:42:53.485401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0909 11:42:53.485649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0909 11:42:53.862557       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
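	The forbidden errors above are the scheduler's informers racing RBAC bootstrap at startup; they are typically transient and stop once the bindings propagate and the caches sync. A minimal after-the-fact check, assuming the addons-630724 context from this run and a caller allowed to impersonate the scheduler:
	kubectl --context addons-630724 auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler   # expect: yes
	kubectl --context addons-630724 auth can-i list nodes --as=system:kube-scheduler                       # expect: yes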
	
	
	==> kubelet <==
	Sep 09 11:47:03 addons-630724 kubelet[1490]: E0909 11:47:03.753323    1490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-25vg2_gadget(c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837)\"" pod="gadget/gadget-25vg2" podUID="c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837"
	Sep 09 11:47:04 addons-630724 kubelet[1490]: I0909 11:47:04.753545    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rdh99" secret="" err="secret \"gcp-auth\" not found"
	Sep 09 11:47:17 addons-630724 kubelet[1490]: I0909 11:47:17.753228    1490 scope.go:117] "RemoveContainer" containerID="b2b4400ec48f6dfa07e51c937f306b77a015a2fe931b2ccaa4d5bb2077e372e4"
	Sep 09 11:47:17 addons-630724 kubelet[1490]: E0909 11:47:17.753506    1490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-25vg2_gadget(c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837)\"" pod="gadget/gadget-25vg2" podUID="c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837"
	Sep 09 11:47:27 addons-630724 kubelet[1490]: I0909 11:47:27.753382    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dm8pk" secret="" err="secret \"gcp-auth\" not found"
	Sep 09 11:47:31 addons-630724 kubelet[1490]: I0909 11:47:31.752348    1490 scope.go:117] "RemoveContainer" containerID="b2b4400ec48f6dfa07e51c937f306b77a015a2fe931b2ccaa4d5bb2077e372e4"
	Sep 09 11:47:31 addons-630724 kubelet[1490]: E0909 11:47:31.752568    1490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-25vg2_gadget(c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837)\"" pod="gadget/gadget-25vg2" podUID="c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837"
	Sep 09 11:47:32 addons-630724 kubelet[1490]: I0909 11:47:32.753844    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-mr9ck" secret="" err="secret \"gcp-auth\" not found"
	Sep 09 11:47:44 addons-630724 kubelet[1490]: I0909 11:47:44.754418    1490 scope.go:117] "RemoveContainer" containerID="b2b4400ec48f6dfa07e51c937f306b77a015a2fe931b2ccaa4d5bb2077e372e4"
	Sep 09 11:47:44 addons-630724 kubelet[1490]: E0909 11:47:44.755053    1490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-25vg2_gadget(c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837)\"" pod="gadget/gadget-25vg2" podUID="c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837"
	Sep 09 11:47:56 addons-630724 kubelet[1490]: I0909 11:47:56.753456    1490 scope.go:117] "RemoveContainer" containerID="b2b4400ec48f6dfa07e51c937f306b77a015a2fe931b2ccaa4d5bb2077e372e4"
	Sep 09 11:47:56 addons-630724 kubelet[1490]: E0909 11:47:56.753666    1490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-25vg2_gadget(c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837)\"" pod="gadget/gadget-25vg2" podUID="c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837"
	Sep 09 11:48:09 addons-630724 kubelet[1490]: I0909 11:48:09.752843    1490 scope.go:117] "RemoveContainer" containerID="b2b4400ec48f6dfa07e51c937f306b77a015a2fe931b2ccaa4d5bb2077e372e4"
	Sep 09 11:48:09 addons-630724 kubelet[1490]: E0909 11:48:09.753800    1490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-25vg2_gadget(c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837)\"" pod="gadget/gadget-25vg2" podUID="c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837"
	Sep 09 11:48:20 addons-630724 kubelet[1490]: I0909 11:48:20.752808    1490 scope.go:117] "RemoveContainer" containerID="b2b4400ec48f6dfa07e51c937f306b77a015a2fe931b2ccaa4d5bb2077e372e4"
	Sep 09 11:48:20 addons-630724 kubelet[1490]: E0909 11:48:20.753008    1490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-25vg2_gadget(c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837)\"" pod="gadget/gadget-25vg2" podUID="c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837"
	Sep 09 11:48:23 addons-630724 kubelet[1490]: I0909 11:48:23.752842    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rdh99" secret="" err="secret \"gcp-auth\" not found"
	Sep 09 11:48:33 addons-630724 kubelet[1490]: I0909 11:48:33.752857    1490 scope.go:117] "RemoveContainer" containerID="b2b4400ec48f6dfa07e51c937f306b77a015a2fe931b2ccaa4d5bb2077e372e4"
	Sep 09 11:48:33 addons-630724 kubelet[1490]: E0909 11:48:33.753083    1490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-25vg2_gadget(c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837)\"" pod="gadget/gadget-25vg2" podUID="c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837"
	Sep 09 11:48:34 addons-630724 kubelet[1490]: I0909 11:48:34.753807    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dm8pk" secret="" err="secret \"gcp-auth\" not found"
	Sep 09 11:48:48 addons-630724 kubelet[1490]: I0909 11:48:48.753447    1490 scope.go:117] "RemoveContainer" containerID="b2b4400ec48f6dfa07e51c937f306b77a015a2fe931b2ccaa4d5bb2077e372e4"
	Sep 09 11:48:48 addons-630724 kubelet[1490]: E0909 11:48:48.754094    1490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-25vg2_gadget(c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837)\"" pod="gadget/gadget-25vg2" podUID="c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837"
	Sep 09 11:48:59 addons-630724 kubelet[1490]: I0909 11:48:59.752982    1490 scope.go:117] "RemoveContainer" containerID="b2b4400ec48f6dfa07e51c937f306b77a015a2fe931b2ccaa4d5bb2077e372e4"
	Sep 09 11:48:59 addons-630724 kubelet[1490]: E0909 11:48:59.753215    1490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-25vg2_gadget(c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837)\"" pod="gadget/gadget-25vg2" podUID="c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837"
	Sep 09 11:49:00 addons-630724 kubelet[1490]: I0909 11:49:00.753588    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-mr9ck" secret="" err="secret \"gcp-auth\" not found"
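	The kubelet entries above show one container (gadget) stuck in CrashLoopBackOff, plus repeated warnings about a missing gcp-auth pull secret. A hedged triage sketch, reusing the pod name and namespace from the log; --previous reads the output of the crashed container instance:
	kubectl --context addons-630724 -n gadget describe pod gadget-25vg2
	kubectl --context addons-630724 -n gadget logs gadget-25vg2 --previous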
	
	
	==> storage-provisioner [7df1caab246cba1503b7b3f98c83d8d5243eaccd6559813510d84a53f3186c9a] <==
	I0909 11:43:06.100122       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0909 11:43:06.120555       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0909 11:43:06.120629       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0909 11:43:06.141211       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0909 11:43:06.143511       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-630724_4e8161d1-fb6b-4efb-a25e-db5dd13e99df!
	I0909 11:43:06.144458       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d00c08e1-96be-4d1a-a9c3-6167005335d5", APIVersion:"v1", ResourceVersion:"526", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-630724_4e8161d1-fb6b-4efb-a25e-db5dd13e99df became leader
	I0909 11:43:06.251853       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-630724_4e8161d1-fb6b-4efb-a25e-db5dd13e99df!
	

-- /stdout --
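The storage-provisioner block above shows Endpoints-based leader election: the pod acquires the kube-system/k8s.io-minikube-hostpath lock before starting its controller. A sketch for inspecting the lock object named in the event, assuming the same context:
	kubectl --context addons-630724 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml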
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-630724 -n addons-630724
helpers_test.go:261: (dbg) Run:  kubectl --context addons-630724 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-p8lvw ingress-nginx-admission-patch-z4gbs test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-630724 describe pod ingress-nginx-admission-create-p8lvw ingress-nginx-admission-patch-z4gbs test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-630724 describe pod ingress-nginx-admission-create-p8lvw ingress-nginx-admission-patch-z4gbs test-job-nginx-0: exit status 1 (110.732202ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-p8lvw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-z4gbs" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-630724 describe pod ingress-nginx-admission-create-p8lvw ingress-nginx-admission-patch-z4gbs test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (200.26s)
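When a workload pod such as test-job-nginx-0 never reaches Running (see helpers_test.go:272 above), a useful follow-up is to compare node allocatable capacity against the scheduler's events. A minimal sketch against this profile; the grep pattern is illustrative:
	kubectl --context addons-630724 describe nodes | grep -A 8 'Allocated resources'
	kubectl --context addons-630724 get events -A --field-selector reason=FailedScheduling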

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
functional_test.go:2288: (dbg) Non-zero exit: out/minikube-linux-arm64 license: exit status 40 (233.755135ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2289: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.23s)
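INET_LICENSES means the minikube license download returned HTTP 404 instead of 200. A hedged repro sketch; LICENSES_URL is a placeholder for whatever endpoint the binary requests, which this report does not show:
	curl -s -o /dev/null -w '%{http_code}\n' "$LICENSES_URL"   # prints 404 for this failure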

Test pass (298/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.17
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.0/json-events 6.21
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.23
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.16
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 217.62
31 TestAddons/serial/GCPAuth/Namespaces 0.19
33 TestAddons/parallel/Registry 16.12
34 TestAddons/parallel/Ingress 20.68
35 TestAddons/parallel/InspektorGadget 11.16
36 TestAddons/parallel/MetricsServer 5.79
39 TestAddons/parallel/CSI 43.24
40 TestAddons/parallel/Headlamp 17.13
41 TestAddons/parallel/CloudSpanner 6.83
42 TestAddons/parallel/LocalPath 51.99
43 TestAddons/parallel/NvidiaDevicePlugin 6.59
44 TestAddons/parallel/Yakd 12
45 TestAddons/StoppedEnableDisable 12.37
46 TestCertOptions 38.59
47 TestCertExpiration 232.39
49 TestForceSystemdFlag 32.6
50 TestForceSystemdEnv 42.05
51 TestDockerEnvContainerd 45.82
56 TestErrorSpam/setup 30.69
57 TestErrorSpam/start 0.97
58 TestErrorSpam/status 1.16
59 TestErrorSpam/pause 1.78
60 TestErrorSpam/unpause 1.85
61 TestErrorSpam/stop 1.48
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 51.34
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.41
68 TestFunctional/serial/KubeContext 0.09
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.13
73 TestFunctional/serial/CacheCmd/cache/add_local 1.24
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.31
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
81 TestFunctional/serial/ExtraConfig 42.66
82 TestFunctional/serial/ComponentHealth 0.16
83 TestFunctional/serial/LogsCmd 1.71
84 TestFunctional/serial/LogsFileCmd 1.71
85 TestFunctional/serial/InvalidService 3.82
87 TestFunctional/parallel/ConfigCmd 0.43
88 TestFunctional/parallel/DashboardCmd 9.13
89 TestFunctional/parallel/DryRun 0.46
90 TestFunctional/parallel/InternationalLanguage 0.19
91 TestFunctional/parallel/StatusCmd 1.03
95 TestFunctional/parallel/ServiceCmdConnect 12.7
96 TestFunctional/parallel/AddonsCmd 0.17
97 TestFunctional/parallel/PersistentVolumeClaim 28.62
99 TestFunctional/parallel/SSHCmd 0.7
100 TestFunctional/parallel/CpCmd 2.43
102 TestFunctional/parallel/FileSync 0.39
103 TestFunctional/parallel/CertSync 2.22
107 TestFunctional/parallel/NodeLabels 0.12
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.79
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.5
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.24
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
125 TestFunctional/parallel/ProfileCmd/profile_list 0.44
126 TestFunctional/parallel/ServiceCmd/List 0.6
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.69
129 TestFunctional/parallel/MountCmd/any-port 6.64
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
131 TestFunctional/parallel/ServiceCmd/Format 0.59
132 TestFunctional/parallel/ServiceCmd/URL 0.44
133 TestFunctional/parallel/MountCmd/specific-port 2.34
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.94
135 TestFunctional/parallel/Version/short 0.08
136 TestFunctional/parallel/Version/components 1.14
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
141 TestFunctional/parallel/ImageCommands/ImageBuild 4.03
142 TestFunctional/parallel/ImageCommands/Setup 0.68
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.45
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.44
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.61
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 116.29
160 TestMultiControlPlane/serial/DeployApp 32.74
161 TestMultiControlPlane/serial/PingHostFromPods 1.69
162 TestMultiControlPlane/serial/AddWorkerNode 24.1
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.74
165 TestMultiControlPlane/serial/CopyFile 19.26
166 TestMultiControlPlane/serial/StopSecondaryNode 12.89
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.59
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.52
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.79
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 123.56
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.79
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
173 TestMultiControlPlane/serial/StopCluster 36.15
174 TestMultiControlPlane/serial/RestartCluster 79.48
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.6
176 TestMultiControlPlane/serial/AddSecondaryNode 44.45
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
181 TestJSONOutput/start/Command 51.9
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.73
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.7
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.74
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.23
206 TestKicCustomNetwork/create_custom_network 39.6
207 TestKicCustomNetwork/use_default_bridge_network 33.02
208 TestKicExistingNetwork 30.94
209 TestKicCustomSubnet 34.81
210 TestKicStaticIP 35.83
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 71.75
215 TestMountStart/serial/StartWithMountFirst 6.18
216 TestMountStart/serial/VerifyMountFirst 0.55
217 TestMountStart/serial/StartWithMountSecond 8.84
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.67
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.33
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 65.41
227 TestMultiNode/serial/DeployApp2Nodes 18
228 TestMultiNode/serial/PingHostFrom2Pods 1.01
229 TestMultiNode/serial/AddNode 17.36
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.36
232 TestMultiNode/serial/CopyFile 10.07
233 TestMultiNode/serial/StopNode 2.26
234 TestMultiNode/serial/StartAfterStop 10.06
235 TestMultiNode/serial/RestartKeepsNodes 107.34
236 TestMultiNode/serial/DeleteNode 5.62
237 TestMultiNode/serial/StopMultiNode 24.05
238 TestMultiNode/serial/RestartMultiNode 47.24
239 TestMultiNode/serial/ValidateNameConflict 33.8
244 TestPreload 121.18
246 TestScheduledStopUnix 109.99
249 TestInsufficientStorage 10.79
250 TestRunningBinaryUpgrade 86.08
252 TestKubernetesUpgrade 352.52
253 TestMissingContainerUpgrade 193.8
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 37.94
257 TestNoKubernetes/serial/StartWithStopK8s 21.84
258 TestNoKubernetes/serial/Start 9.05
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
260 TestNoKubernetes/serial/ProfileList 1.15
261 TestNoKubernetes/serial/Stop 1.28
262 TestNoKubernetes/serial/StartNoArgs 7.41
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
264 TestStoppedBinaryUpgrade/Setup 0.77
265 TestStoppedBinaryUpgrade/Upgrade 110.54
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.24
275 TestPause/serial/Start 66.55
276 TestPause/serial/SecondStartNoReconfiguration 7.26
280 TestPause/serial/Pause 1.28
281 TestPause/serial/VerifyStatus 0.43
282 TestPause/serial/Unpause 0.91
287 TestNetworkPlugins/group/false 5.13
288 TestPause/serial/PauseAgain 1.17
289 TestPause/serial/DeletePaused 2.97
290 TestPause/serial/VerifyDeletedResources 0.15
295 TestStartStop/group/old-k8s-version/serial/FirstStart 149.13
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.89
298 TestStartStop/group/no-preload/serial/FirstStart 63.37
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.65
300 TestStartStop/group/old-k8s-version/serial/Stop 13.52
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
302 TestStartStop/group/old-k8s-version/serial/SecondStart 306.88
303 TestStartStop/group/no-preload/serial/DeployApp 9.5
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.66
305 TestStartStop/group/no-preload/serial/Stop 12.37
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/no-preload/serial/SecondStart 269.01
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
311 TestStartStop/group/old-k8s-version/serial/Pause 3.04
313 TestStartStop/group/embed-certs/serial/FirstStart 64.87
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.16
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
317 TestStartStop/group/no-preload/serial/Pause 4.23
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 62
320 TestStartStop/group/embed-certs/serial/DeployApp 9.4
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
322 TestStartStop/group/embed-certs/serial/Stop 12.21
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
324 TestStartStop/group/embed-certs/serial/SecondStart 268.81
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.6
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.67
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.85
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 291.24
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
333 TestStartStop/group/embed-certs/serial/Pause 3.25
335 TestStartStop/group/newest-cni/serial/FirstStart 41.28
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.4
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
340 TestStartStop/group/newest-cni/serial/Stop 1.54
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
342 TestStartStop/group/newest-cni/serial/SecondStart 21.46
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.78
345 TestNetworkPlugins/group/auto/Start 73.33
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
349 TestStartStop/group/newest-cni/serial/Pause 3.71
350 TestNetworkPlugins/group/kindnet/Start 61.54
351 TestNetworkPlugins/group/auto/KubeletFlags 0.3
352 TestNetworkPlugins/group/auto/NetCatPod 10.3
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestNetworkPlugins/group/auto/DNS 0.17
355 TestNetworkPlugins/group/auto/Localhost 0.15
356 TestNetworkPlugins/group/auto/HairPin 0.16
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
358 TestNetworkPlugins/group/kindnet/NetCatPod 10.32
359 TestNetworkPlugins/group/kindnet/DNS 0.22
360 TestNetworkPlugins/group/kindnet/Localhost 0.23
361 TestNetworkPlugins/group/kindnet/HairPin 0.24
362 TestNetworkPlugins/group/calico/Start 76.79
363 TestNetworkPlugins/group/custom-flannel/Start 59.05
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.32
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.29
368 TestNetworkPlugins/group/calico/NetCatPod 10.31
369 TestNetworkPlugins/group/custom-flannel/DNS 0.22
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
372 TestNetworkPlugins/group/calico/DNS 0.29
373 TestNetworkPlugins/group/calico/Localhost 0.23
374 TestNetworkPlugins/group/calico/HairPin 0.22
375 TestNetworkPlugins/group/enable-default-cni/Start 75.99
376 TestNetworkPlugins/group/flannel/Start 58.47
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
379 TestNetworkPlugins/group/flannel/NetCatPod 9.46
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.3
382 TestNetworkPlugins/group/flannel/DNS 0.52
383 TestNetworkPlugins/group/flannel/Localhost 0.21
384 TestNetworkPlugins/group/flannel/HairPin 0.18
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
388 TestNetworkPlugins/group/bridge/Start 76.12
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
390 TestNetworkPlugins/group/bridge/NetCatPod 10.29
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.17
393 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.20.0/json-events (7.17s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-413866 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-413866 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.168093177s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.17s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-413866
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-413866: exit status 85 (100.297732ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-413866 | jenkins | v1.34.0 | 09 Sep 24 11:41 UTC |          |
	|         | -p download-only-413866        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/09 11:41:49
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0909 11:41:49.870500  298746 out.go:345] Setting OutFile to fd 1 ...
	I0909 11:41:49.870672  298746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:41:49.870687  298746 out.go:358] Setting ErrFile to fd 2...
	I0909 11:41:49.870693  298746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:41:49.870974  298746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
	W0909 11:41:49.871133  298746 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19584-293351/.minikube/config/config.json: open /home/jenkins/minikube-integration/19584-293351/.minikube/config/config.json: no such file or directory
	I0909 11:41:49.871619  298746 out.go:352] Setting JSON to true
	I0909 11:41:49.872496  298746 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5048,"bootTime":1725877062,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0909 11:41:49.872572  298746 start.go:139] virtualization:  
	I0909 11:41:49.875925  298746 out.go:97] [download-only-413866] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0909 11:41:49.876073  298746 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19584-293351/.minikube/cache/preloaded-tarball: no such file or directory
	I0909 11:41:49.876158  298746 notify.go:220] Checking for updates...
	I0909 11:41:49.878081  298746 out.go:169] MINIKUBE_LOCATION=19584
	I0909 11:41:49.880485  298746 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0909 11:41:49.882481  298746 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19584-293351/kubeconfig
	I0909 11:41:49.884226  298746 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-293351/.minikube
	I0909 11:41:49.886117  298746 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0909 11:41:49.892019  298746 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0909 11:41:49.892391  298746 driver.go:394] Setting default libvirt URI to qemu:///system
	I0909 11:41:49.919648  298746 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0909 11:41:49.919766  298746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 11:41:49.976112  298746 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-09 11:41:49.965998611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0909 11:41:49.976266  298746 docker.go:307] overlay module found
	I0909 11:41:49.979387  298746 out.go:97] Using the docker driver based on user configuration
	I0909 11:41:49.979439  298746 start.go:297] selected driver: docker
	I0909 11:41:49.979447  298746 start.go:901] validating driver "docker" against <nil>
	I0909 11:41:49.979584  298746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 11:41:50.054388  298746 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-09 11:41:50.028624666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0909 11:41:50.054602  298746 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0909 11:41:50.054921  298746 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0909 11:41:50.059255  298746 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0909 11:41:50.062479  298746 out.go:169] Using Docker driver with root privileges
	I0909 11:41:50.066160  298746 cni.go:84] Creating CNI manager for ""
	I0909 11:41:50.066205  298746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0909 11:41:50.066220  298746 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0909 11:41:50.066397  298746 start.go:340] cluster config:
	{Name:download-only-413866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-413866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0909 11:41:50.068937  298746 out.go:97] Starting "download-only-413866" primary control-plane node in "download-only-413866" cluster
	I0909 11:41:50.068994  298746 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0909 11:41:50.071389  298746 out.go:97] Pulling base image v0.0.45 ...
	I0909 11:41:50.071436  298746 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0909 11:41:50.071668  298746 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0909 11:41:50.102430  298746 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0909 11:41:50.102628  298746 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0909 11:41:50.102750  298746 image.go:148] Writing gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0909 11:41:50.130570  298746 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0909 11:41:50.130603  298746 cache.go:56] Caching tarball of preloaded images
	I0909 11:41:50.130791  298746 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0909 11:41:50.133426  298746 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0909 11:41:50.133466  298746 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0909 11:41:50.217752  298746 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19584-293351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-413866 host does not exist
	  To start a cluster, run: "minikube start -p download-only-413866"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
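The download in the log above carries an md5 checksum in the query string. The same integrity check can be repeated by hand; this sketch uses only the URL and digest shown in the log:
	curl -sLO 'https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4'
	md5sum preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4   # expect: 7e3d48ccb9f143791669d02e14ce1643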

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-413866
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.0/json-events (6.21s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-847638 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-847638 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.2071035s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (6.21s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-847638
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-847638: exit status 85 (74.092996ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-413866 | jenkins | v1.34.0 | 09 Sep 24 11:41 UTC |                     |
	|         | -p download-only-413866        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Sep 24 11:41 UTC | 09 Sep 24 11:41 UTC |
	| delete  | -p download-only-413866        | download-only-413866 | jenkins | v1.34.0 | 09 Sep 24 11:41 UTC | 09 Sep 24 11:41 UTC |
	| start   | -o=json --download-only        | download-only-847638 | jenkins | v1.34.0 | 09 Sep 24 11:41 UTC |                     |
	|         | -p download-only-847638        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/09 11:41:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0909 11:41:57.488767  298950 out.go:345] Setting OutFile to fd 1 ...
	I0909 11:41:57.488954  298950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:41:57.488984  298950 out.go:358] Setting ErrFile to fd 2...
	I0909 11:41:57.489004  298950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:41:57.489373  298950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
	I0909 11:41:57.489894  298950 out.go:352] Setting JSON to true
	I0909 11:41:57.490859  298950 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5056,"bootTime":1725877062,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0909 11:41:57.490935  298950 start.go:139] virtualization:  
	I0909 11:41:57.494448  298950 out.go:97] [download-only-847638] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0909 11:41:57.494788  298950 notify.go:220] Checking for updates...
	I0909 11:41:57.497113  298950 out.go:169] MINIKUBE_LOCATION=19584
	I0909 11:41:57.499825  298950 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0909 11:41:57.502788  298950 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19584-293351/kubeconfig
	I0909 11:41:57.504904  298950 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-293351/.minikube
	I0909 11:41:57.507172  298950 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0909 11:41:57.511954  298950 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0909 11:41:57.512277  298950 driver.go:394] Setting default libvirt URI to qemu:///system
	I0909 11:41:57.540442  298950 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0909 11:41:57.540570  298950 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 11:41:57.599192  298950 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-09 11:41:57.589563584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0909 11:41:57.599315  298950 docker.go:307] overlay module found
	I0909 11:41:57.601747  298950 out.go:97] Using the docker driver based on user configuration
	I0909 11:41:57.601781  298950 start.go:297] selected driver: docker
	I0909 11:41:57.601789  298950 start.go:901] validating driver "docker" against <nil>
	I0909 11:41:57.601906  298950 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 11:41:57.654635  298950 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-09 11:41:57.644851994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0909 11:41:57.654812  298950 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0909 11:41:57.655108  298950 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0909 11:41:57.655320  298950 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0909 11:41:57.658246  298950 out.go:169] Using Docker driver with root privileges
	I0909 11:41:57.660807  298950 cni.go:84] Creating CNI manager for ""
	I0909 11:41:57.660833  298950 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0909 11:41:57.660850  298950 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0909 11:41:57.660946  298950 start.go:340] cluster config:
	{Name:download-only-847638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-847638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0909 11:41:57.663900  298950 out.go:97] Starting "download-only-847638" primary control-plane node in "download-only-847638" cluster
	I0909 11:41:57.663949  298950 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0909 11:41:57.666567  298950 out.go:97] Pulling base image v0.0.45 ...
	I0909 11:41:57.666611  298950 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0909 11:41:57.666797  298950 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0909 11:41:57.682348  298950 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0909 11:41:57.682460  298950 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0909 11:41:57.682486  298950 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0909 11:41:57.682498  298950 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0909 11:41:57.682506  298950 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0909 11:41:57.721704  298950 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0909 11:41:57.721731  298950 cache.go:56] Caching tarball of preloaded images
	I0909 11:41:57.721890  298950 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0909 11:41:57.724691  298950 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0909 11:41:57.724721  298950 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0909 11:41:57.816396  298950 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/19584-293351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0909 11:42:02.121787  298950 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0909 11:42:02.121899  298950 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19584-293351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-847638 host does not exist
	  To start a cluster, run: "minikube start -p download-only-847638"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.23s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-847638
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-053699 --alsologtostderr --binary-mirror http://127.0.0.1:41591 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-053699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-053699
--- PASS: TestBinaryMirror (0.62s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-630724
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-630724: exit status 85 (70.148863ms)

-- stdout --
	* Profile "addons-630724" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-630724"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-630724
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-630724: exit status 85 (90.360152ms)

-- stdout --
	* Profile "addons-630724" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-630724"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (217.62s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-630724 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-630724 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m37.614569077s)
--- PASS: TestAddons/Setup (217.62s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-630724 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-630724 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/parallel/Registry (16.12s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.145196ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-mr9ck" [e7dd8cff-56cc-4632-a210-f2a55ade65eb] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.009544848s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dm8pk" [a039b26f-4cfc-480f-9f4f-bf39b72b5d47] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004616036s
addons_test.go:342: (dbg) Run:  kubectl --context addons-630724 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-630724 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-630724 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.099175008s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-630724 ip
2024/09/09 11:49:37 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-630724 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.12s)

TestAddons/parallel/Ingress (20.68s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-630724 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-630724 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-630724 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [57bc3928-3b4c-4563-a6a7-91cc96ab5e3f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [57bc3928-3b4c-4563-a6a7-91cc96ab5e3f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003776298s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-630724 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-630724 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-630724 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-630724 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-630724 addons disable ingress-dns --alsologtostderr -v=1: (1.809090717s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-630724 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-630724 addons disable ingress --alsologtostderr -v=1: (8.009167132s)
--- PASS: TestAddons/parallel/Ingress (20.68s)

TestAddons/parallel/InspektorGadget (11.16s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-25vg2" [c94aaf2a-00d7-4b99-8aaf-2a73cfa7e837] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004688583s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-630724
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-630724: (6.156347927s)
--- PASS: TestAddons/parallel/InspektorGadget (11.16s)

TestAddons/parallel/MetricsServer (5.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.308685ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-mtk8x" [d3a4f297-8b21-4bf6-b16d-87007ad009c9] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004865347s
addons_test.go:417: (dbg) Run:  kubectl --context addons-630724 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-630724 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)

TestAddons/parallel/CSI (43.24s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.781085ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-630724 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-630724 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ed6c5286-851b-426c-a915-8ed236bbf50f] Pending
helpers_test.go:344: "task-pv-pod" [ed6c5286-851b-426c-a915-8ed236bbf50f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ed6c5286-851b-426c-a915-8ed236bbf50f] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003218373s
addons_test.go:590: (dbg) Run:  kubectl --context addons-630724 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-630724 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-630724 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-630724 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-630724 delete pod task-pv-pod: (1.127909237s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-630724 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-630724 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-630724 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8fa54de6-a5f4-406e-ab3e-59643ea44022] Pending
helpers_test.go:344: "task-pv-pod-restore" [8fa54de6-a5f4-406e-ab3e-59643ea44022] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8fa54de6-a5f4-406e-ab3e-59643ea44022] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003523247s
addons_test.go:632: (dbg) Run:  kubectl --context addons-630724 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-630724 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-630724 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-630724 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-630724 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.831476446s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-630724 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-630724 addons disable volumesnapshots --alsologtostderr -v=1: (1.168265655s)
--- PASS: TestAddons/parallel/CSI (43.24s)

TestAddons/parallel/Headlamp (17.13s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-630724 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-630724 --alsologtostderr -v=1: (1.276954892s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-8xtlw" [6b24977c-e6af-476d-90e4-dd5c12593cda] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-8xtlw" [6b24977c-e6af-476d-90e4-dd5c12593cda] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-8xtlw" [6b24977c-e6af-476d-90e4-dd5c12593cda] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-8xtlw" [6b24977c-e6af-476d-90e4-dd5c12593cda] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004404806s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-630724 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-630724 addons disable headlamp --alsologtostderr -v=1: (5.847032649s)
--- PASS: TestAddons/parallel/Headlamp (17.13s)

TestAddons/parallel/CloudSpanner (6.83s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-ms8dm" [f8a461a2-3bdd-4a20-8c10-1b8b3bfd201d] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003514455s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-630724
--- PASS: TestAddons/parallel/CloudSpanner (6.83s)

TestAddons/parallel/LocalPath (51.99s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-630724 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-630724 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630724 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d848279b-f786-4b4b-aa33-c3974bc00c6c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d848279b-f786-4b4b-aa33-c3974bc00c6c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d848279b-f786-4b4b-aa33-c3974bc00c6c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004380549s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-630724 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-630724 ssh "cat /opt/local-path-provisioner/pvc-f5e7a2ce-2478-4dde-9017-0573a2ee5f3b_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-630724 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-630724 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-630724 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-630724 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.755397989s)
--- PASS: TestAddons/parallel/LocalPath (51.99s)

TestAddons/parallel/NvidiaDevicePlugin (6.59s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rdh99" [2c40ce95-e2f5-4194-a39f-80ddedabf707] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004246316s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-630724
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

TestAddons/parallel/Yakd (12s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-54qsf" [e1355bf2-53f0-430a-b0ae-065b776a963f] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004069159s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-630724 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-630724 addons disable yakd --alsologtostderr -v=1: (5.999927193s)
--- PASS: TestAddons/parallel/Yakd (12.00s)

TestAddons/StoppedEnableDisable (12.37s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-630724
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-630724: (12.103214204s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-630724
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-630724
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-630724
--- PASS: TestAddons/StoppedEnableDisable (12.37s)

TestCertOptions (38.59s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-095349 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-095349 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.766772831s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-095349 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-095349 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-095349 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-095349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-095349
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-095349: (2.05823625s)
--- PASS: TestCertOptions (38.59s)

TestCertExpiration (232.39s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-915123 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-915123 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (42.499317345s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-915123 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-915123 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.505118996s)
helpers_test.go:175: Cleaning up "cert-expiration-915123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-915123
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-915123: (2.38259611s)
--- PASS: TestCertExpiration (232.39s)

TestForceSystemdFlag (32.6s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-648072 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-648072 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (29.765166932s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-648072 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-648072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-648072
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-648072: (2.495137139s)
--- PASS: TestForceSystemdFlag (32.60s)

TestForceSystemdEnv (42.05s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-315028 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-315028 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.209314195s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-315028 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-315028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-315028
E0909 12:28:46.449521  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-315028: (2.396065871s)
--- PASS: TestForceSystemdEnv (42.05s)

TestDockerEnvContainerd (45.82s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-503821 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-503821 --driver=docker  --container-runtime=containerd: (30.181894613s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-503821"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Sn4cHvzw3wvO/agent.317836" SSH_AGENT_PID="317837" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Sn4cHvzw3wvO/agent.317836" SSH_AGENT_PID="317837" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Sn4cHvzw3wvO/agent.317836" SSH_AGENT_PID="317837" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.181993982s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Sn4cHvzw3wvO/agent.317836" SSH_AGENT_PID="317837" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Sn4cHvzw3wvO/agent.317836" SSH_AGENT_PID="317837" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker image ls": (1.008926755s)
helpers_test.go:175: Cleaning up "dockerenv-503821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-503821
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-503821: (2.010869515s)
--- PASS: TestDockerEnvContainerd (45.82s)

TestErrorSpam/setup (30.69s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-134322 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-134322 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-134322 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-134322 --driver=docker  --container-runtime=containerd: (30.690582559s)
--- PASS: TestErrorSpam/setup (30.69s)

TestErrorSpam/start (0.97s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 start --dry-run
--- PASS: TestErrorSpam/start (0.97s)

TestErrorSpam/status (1.16s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (1.78s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 pause
--- PASS: TestErrorSpam/pause (1.78s)

TestErrorSpam/unpause (1.85s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

TestErrorSpam/stop (1.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 stop: (1.288520704s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-134322 --log_dir /tmp/nospam-134322 stop
--- PASS: TestErrorSpam/stop (1.48s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19584-293351/.minikube/files/etc/test/nested/copy/298741/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.34s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-649830 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-649830 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (51.340520172s)
--- PASS: TestFunctional/serial/StartWithProxy (51.34s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.41s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-649830 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-649830 --alsologtostderr -v=8: (6.405832579s)
functional_test.go:663: soft start took 6.411289948s for "functional-649830" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.41s)

TestFunctional/serial/KubeContext (0.09s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-649830 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-649830 cache add registry.k8s.io/pause:3.1: (1.498509069s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-649830 cache add registry.k8s.io/pause:3.3: (1.424667973s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-649830 cache add registry.k8s.io/pause:latest: (1.204734066s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-649830 /tmp/TestFunctionalserialCacheCmdcacheadd_local2507460045/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 cache add minikube-local-cache-test:functional-649830
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 cache delete minikube-local-cache-test:functional-649830
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-649830
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-649830 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (309.727092ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-649830 cache reload: (1.135689394s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.31s)
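
The subtest above captures the whole `cache reload` contract: delete an image inside the node, confirm `crictl inspecti` fails, run `cache reload`, then confirm the inspect succeeds again. A minimal standalone sketch of that flow, assuming the `out/minikube-linux-arm64` binary and `functional-649830` profile from this run (this is an illustration, not the test's actual implementation):

	// cache_reload_sketch.go - a minimal sketch, not the test code itself.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command, echoing its output, and returns its error.
	func run(args ...string) error {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf("$ %v\n%s", args, out)
		return err
	}

	func main() {
		mk := "out/minikube-linux-arm64" // assumption: the binary built for this run
		p := "functional-649830"
		run(mk, "-p", p, "ssh", "sudo", "crictl", "rmi", "registry.k8s.io/pause:latest")
		// While the image is absent, inspecti is expected to fail ("no such image").
		if err := run(mk, "-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err == nil {
			fmt.Println("image unexpectedly still present")
		}
		// cache reload pushes the cached images back into the node's runtime.
		run(mk, "-p", p, "cache", "reload")
		if err := run(mk, "-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("cache reload did not restore the image:", err)
		}
	}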

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 kubectl -- --context functional-649830 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-649830 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (42.66s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-649830 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-649830 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.66258662s)
functional_test.go:761: restart took 42.662729119s for "functional-649830" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.66s)

TestFunctional/serial/ComponentHealth (0.16s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-649830 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.16s)

TestFunctional/serial/LogsCmd (1.71s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-649830 logs: (1.714054677s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

TestFunctional/serial/LogsFileCmd (1.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 logs --file /tmp/TestFunctionalserialLogsFileCmd3656227943/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-649830 logs --file /tmp/TestFunctionalserialLogsFileCmd3656227943/001/logs.txt: (1.709444969s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.71s)

TestFunctional/serial/InvalidService (3.82s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-649830 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-649830
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-649830: exit status 115 (464.980122ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31046 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-649830 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.82s)
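
The point of this subtest is the failure mode: for a service whose pods never start, `minikube service` must exit non-zero with SVC_UNREACHABLE rather than print a dead URL and return 0. A sketch of asserting on that exit code, with the binary, profile, and the observed code 115 all taken from this log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Assumption: profile and broken service exist as created above.
		cmd := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc", "-p", "functional-649830")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if ee, ok := err.(*exec.ExitError); ok {
			// 115 is the SVC_UNREACHABLE exit code observed in this run.
			fmt.Println("exit code:", ee.ExitCode())
		}
	}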

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-649830 config get cpus: exit status 14 (68.450537ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-649830 config get cpus: exit status 14 (71.037669ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
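
The unset/get/set/get/unset/get round-trip above pins down `config get` semantics: a missing key exits 14 with "specified key could not be found in config", while a set key prints its value and exits 0. A sketch of the same round-trip, assuming the binary and profile from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// get runs "config get cpus" and returns its exit code;
	// 14 is the "key not found" code observed twice in this log.
	func get(mk string) int {
		err := exec.Command(mk, "-p", "functional-649830", "config", "get", "cpus").Run()
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return 0
	}

	func main() {
		mk := "out/minikube-linux-arm64" // assumption: the binary built for this run
		exec.Command(mk, "-p", "functional-649830", "config", "unset", "cpus").Run()
		fmt.Println("after unset:", get(mk)) // expected: 14
		exec.Command(mk, "-p", "functional-649830", "config", "set", "cpus", "2").Run()
		fmt.Println("after set:", get(mk)) // expected: 0
	}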

TestFunctional/parallel/DashboardCmd (9.13s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-649830 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-649830 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 332484: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.13s)

TestFunctional/parallel/DryRun (0.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-649830 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-649830 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (178.437007ms)

-- stdout --
	* [functional-649830] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19584
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19584-293351/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-293351/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0909 11:55:28.372278  332148 out.go:345] Setting OutFile to fd 1 ...
	I0909 11:55:28.372451  332148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:55:28.372462  332148 out.go:358] Setting ErrFile to fd 2...
	I0909 11:55:28.372467  332148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:55:28.372714  332148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
	I0909 11:55:28.373069  332148 out.go:352] Setting JSON to false
	I0909 11:55:28.374193  332148 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5867,"bootTime":1725877062,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0909 11:55:28.374269  332148 start.go:139] virtualization:  
	I0909 11:55:28.377445  332148 out.go:177] * [functional-649830] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0909 11:55:28.379686  332148 out.go:177]   - MINIKUBE_LOCATION=19584
	I0909 11:55:28.379747  332148 notify.go:220] Checking for updates...
	I0909 11:55:28.383792  332148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0909 11:55:28.386260  332148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19584-293351/kubeconfig
	I0909 11:55:28.388612  332148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-293351/.minikube
	I0909 11:55:28.391172  332148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0909 11:55:28.393309  332148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0909 11:55:28.396495  332148 config.go:182] Loaded profile config "functional-649830": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0909 11:55:28.397109  332148 driver.go:394] Setting default libvirt URI to qemu:///system
	I0909 11:55:28.428792  332148 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0909 11:55:28.428903  332148 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 11:55:28.484817  332148 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-09 11:55:28.474757236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0909 11:55:28.484936  332148 docker.go:307] overlay module found
	I0909 11:55:28.488411  332148 out.go:177] * Using the docker driver based on existing profile
	I0909 11:55:28.490399  332148 start.go:297] selected driver: docker
	I0909 11:55:28.490429  332148 start.go:901] validating driver "docker" against &{Name:functional-649830 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-649830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0909 11:55:28.490575  332148 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0909 11:55:28.493486  332148 out.go:201] 
	W0909 11:55:28.495500  332148 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0909 11:55:28.497731  332148 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-649830 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.46s)
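
The first invocation failing is the expected outcome here: even with `--dry-run`, minikube validates the requested resources, and 250MB falls below the 1800MB usable minimum, producing RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23; the second, flag-free dry run then passes. The check reduces to a simple comparison (values taken from this log; the constant name is illustrative, not minikube's):

	package main

	import "fmt"

	func main() {
		// Rule implied by the error above: the request must meet the usable minimum.
		const usableMinimumMB = 1800
		requestedMB := 250 // from --memory 250MB
		if requestedMB < usableMinimumMB {
			fmt.Printf("RSRC_INSUFFICIENT_REQ_MEMORY: %dMiB is less than the %dMB minimum\n",
				requestedMB, usableMinimumMB)
		}
	}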

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-649830 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-649830 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (194.418008ms)

-- stdout --
	* [functional-649830] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19584
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19584-293351/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-293351/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0909 11:55:28.194404  332102 out.go:345] Setting OutFile to fd 1 ...
	I0909 11:55:28.194578  332102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:55:28.194588  332102 out.go:358] Setting ErrFile to fd 2...
	I0909 11:55:28.194595  332102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:55:28.194996  332102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
	I0909 11:55:28.195503  332102 out.go:352] Setting JSON to false
	I0909 11:55:28.196513  332102 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5867,"bootTime":1725877062,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0909 11:55:28.196601  332102 start.go:139] virtualization:  
	I0909 11:55:28.199792  332102 out.go:177] * [functional-649830] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0909 11:55:28.202732  332102 out.go:177]   - MINIKUBE_LOCATION=19584
	I0909 11:55:28.202919  332102 notify.go:220] Checking for updates...
	I0909 11:55:28.206796  332102 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0909 11:55:28.209075  332102 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19584-293351/kubeconfig
	I0909 11:55:28.214226  332102 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-293351/.minikube
	I0909 11:55:28.216278  332102 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0909 11:55:28.218290  332102 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0909 11:55:28.220520  332102 config.go:182] Loaded profile config "functional-649830": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0909 11:55:28.221094  332102 driver.go:394] Setting default libvirt URI to qemu:///system
	I0909 11:55:28.246909  332102 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0909 11:55:28.247037  332102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 11:55:28.307167  332102 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-09 11:55:28.296885186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0909 11:55:28.307291  332102 docker.go:307] overlay module found
	I0909 11:55:28.310956  332102 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0909 11:55:28.312745  332102 start.go:297] selected driver: docker
	I0909 11:55:28.312760  332102 start.go:901] validating driver "docker" against &{Name:functional-649830 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-649830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0909 11:55:28.312885  332102 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0909 11:55:28.315170  332102 out.go:201] 
	W0909 11:55:28.317082  332102 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0909 11:55:28.319354  332102 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
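
(The French stderr above is the expected result, not a defect: it is the localized form of the same RSRC_INSUFFICIENT_REQ_MEMORY failure seen in DryRun, roughly "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB". The error key stays stable across locales while the message text is translated, which is what this test verifies.)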

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (12.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-649830 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-649830 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-g724n" [c1962ff4-1bfd-4d68-b9f0-0ea6b876b957] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-g724n" [c1962ff4-1bfd-4d68-b9f0-0ea6b876b957] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004401031s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30181
functional_test.go:1675: http://192.168.49.2:30181: success! body:

Hostname: hello-node-connect-65d86f57f4-g724n

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30181
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.70s)
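
The echoed body above proves the NodePort path end to end: deployment created, service exposed, URL resolved, and the HTTP round trip served by the pod named in the Hostname line. A sketch of the same probe, assuming the service already exists as created above:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// Resolve the NodePort URL the same way the test does.
		url, _ := exec.Command("out/minikube-linux-arm64", "-p", "functional-649830",
			"service", "hello-node-connect", "--url").Output()
		resp, err := http.Get(strings.TrimSpace(string(url)))
		if err != nil {
			fmt.Println("endpoint not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The echoserver reflects the request; its Hostname line names the serving pod.
		fmt.Println(string(body))
	}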

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (28.62s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [89791e89-5ba4-4ff6-b267-f5c835762c95] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.030939095s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-649830 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-649830 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-649830 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-649830 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-649830 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1c2755f8-63d4-47c1-b854-0943f3ffe253] Pending
helpers_test.go:344: "sp-pod" [1c2755f8-63d4-47c1-b854-0943f3ffe253] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1c2755f8-63d4-47c1-b854-0943f3ffe253] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004406207s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-649830 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-649830 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-649830 delete -f testdata/storage-provisioner/pod.yaml: (1.015340802s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-649830 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [97c7d5c2-9414-4971-9ac4-2afda6530544] Pending
helpers_test.go:344: "sp-pod" [97c7d5c2-9414-4971-9ac4-2afda6530544] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [97c7d5c2-9414-4971-9ac4-2afda6530544] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003674323s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-649830 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.62s)
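
The delete-and-recreate step in the middle is what makes this a persistence test: /tmp/mount/foo, written by the first sp-pod, must still be listed by the replacement pod, proving the PVC-backed volume outlived its pod. A sketch of that check (context name and paths from this log; waiting for the new pod to become Ready is omitted for brevity):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubectl runs a command against the cluster context used in this report.
	func kubectl(args ...string) ([]byte, error) {
		full := append([]string{"--context", "functional-649830"}, args...)
		return exec.Command("kubectl", full...).CombinedOutput()
	}

	func main() {
		// Assumption: sp-pod mounts the PVC at /tmp/mount, as in the test's manifests.
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		kubectl("delete", "pod", "sp-pod")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// Once the replacement pod is Running, the file must still be there.
		out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
		fmt.Printf("%s err=%v\n", out, err)
	}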

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (2.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh -n functional-649830 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 cp functional-649830:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1878031352/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh -n functional-649830 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh -n functional-649830 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.43s)

TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/298741/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "sudo cat /etc/test/nested/copy/298741/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (2.22s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/298741.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "sudo cat /etc/ssl/certs/298741.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/298741.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "sudo cat /usr/share/ca-certificates/298741.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2987412.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "sudo cat /etc/ssl/certs/2987412.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2987412.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "sudo cat /usr/share/ca-certificates/2987412.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.22s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-649830 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.79s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-649830 ssh "sudo systemctl is-active docker": exit status 1 (384.392461ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-649830 ssh "sudo systemctl is-active crio": exit status 1 (407.017618ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.79s)
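
The "status 3" in stderr is systemctl's conventional is-active exit code for an inactive unit, so a non-zero exit plus "inactive" on stdout is exactly what proves docker and crio are disabled while containerd serves as the runtime. A sketch of the same probe (binary and profile from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, unit := range []string{"docker", "crio"} {
			// Assumption: binary path and profile as used in this report.
			out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-649830",
				"ssh", fmt.Sprintf("sudo systemctl is-active %s", unit)).CombinedOutput()
			state := strings.TrimSpace(string(out))
			// err is non-nil here because is-active exits 3 for inactive units.
			fmt.Printf("%s: %s (err=%v)\n", unit, state, err)
		}
	}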

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-649830 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-649830 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-649830 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-649830 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 329801: os: process already finished
helpers_test.go:502: unable to terminate pid 329626: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-649830 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-649830 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c12b6637-83fe-494f-8ff7-5f4e3f3632f8] Pending
helpers_test.go:344: "nginx-svc" [c12b6637-83fe-494f-8ff7-5f4e3f3632f8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c12b6637-83fe-494f-8ff7-5f4e3f3632f8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004024905s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.50s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-649830 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
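
With a `minikube tunnel` process running, the LoadBalancer service acquires a routable ingress IP, which the jsonpath query above extracts; the AccessDirect subtest below then fetches it over plain HTTP. A sketch combining the two steps (assuming a tunnel is already up for this profile):

	package main

	import (
		"fmt"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// Extract the ingress IP exactly as the test's jsonpath query does.
		ip, _ := exec.Command("kubectl", "--context", "functional-649830",
			"get", "svc", "nginx-svc", "-o",
			"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		url := "http://" + strings.TrimSpace(string(ip))
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("tunnel not routing:", err)
			return
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status)
	}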

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.122.95 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-649830 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-649830 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-649830 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-2xgh4" [efbba889-a95a-48c9-9e84-02718d5489b0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-2xgh4" [efbba889-a95a-48c9-9e84-02718d5489b0] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005656832s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "373.003317ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "64.790677ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "386.044193ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "163.830202ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 service list -o json
functional_test.go:1494: Took "684.981627ms" to run "out/minikube-linux-arm64 -p functional-649830 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.69s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.64s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-649830 /tmp/TestFunctionalparallelMountCmdany-port2768373917/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725882925147710600" to /tmp/TestFunctionalparallelMountCmdany-port2768373917/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725882925147710600" to /tmp/TestFunctionalparallelMountCmdany-port2768373917/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725882925147710600" to /tmp/TestFunctionalparallelMountCmdany-port2768373917/001/test-1725882925147710600
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  9 11:55 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  9 11:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  9 11:55 test-1725882925147710600
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh cat /mount-9p/test-1725882925147710600
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-649830 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [46a51747-896f-4220-b062-f67888142e95] Pending
helpers_test.go:344: "busybox-mount" [46a51747-896f-4220-b062-f67888142e95] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [46a51747-896f-4220-b062-f67888142e95] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [46a51747-896f-4220-b062-f67888142e95] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.011671862s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-649830 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-649830 /tmp/TestFunctionalparallelMountCmdany-port2768373917/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.64s)
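
The 9p mount exercised above can be reproduced against any running profile. A minimal sketch (the profile name and host path are placeholders, not the CI values):

  minikube -p <profile> mount /tmp/hostdir:/mount-9p &   # serve a host directory into the guest over 9p
  minikube -p <profile> ssh "findmnt -T /mount-9p"       # verify the 9p filesystem is mounted
  minikube -p <profile> ssh -- ls -la /mount-9p          # host-side files should be visible here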

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31729
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31729
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)
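
HTTPS, Format and URL all resolve the same NodePort endpoint; only the presentation differs. The equivalent manual queries, using the flags from this run:

  minikube -p functional-649830 service hello-node --url                      # http://192.168.49.2:31729 in this run
  minikube -p functional-649830 service hello-node --url --https              # same endpoint with an https:// scheme
  minikube -p functional-649830 service hello-node --url --format="{{.IP}}"   # node IP only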

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.34s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-649830 /tmp/TestFunctionalparallelMountCmdspecific-port1730586213/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-649830 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (437.769007ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-649830 /tmp/TestFunctionalparallelMountCmdspecific-port1730586213/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-649830 ssh "sudo umount -f /mount-9p": exit status 1 (328.609497ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-649830 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-649830 /tmp/TestFunctionalparallelMountCmdspecific-port1730586213/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.34s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.94s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-649830 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2294394650/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-649830 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2294394650/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-649830 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2294394650/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-649830 ssh "findmnt -T" /mount1: (1.181336586s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-649830 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-649830 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2294394650/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-649830 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2294394650/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-649830 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2294394650/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.94s)
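
VerifyCleanup relies on mount --kill, which tears down every running mount process for the profile at once; a one-line sketch:

  minikube -p <profile> mount --kill=true   # kill all 9p mount daemons for this profile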

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.14s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-649830 version -o=json --components: (1.144007281s)
--- PASS: TestFunctional/parallel/Version/components (1.14s)
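
Both Version subtests map to plain CLI invocations; a sketch (the exact component set reported is an assumption, not shown here):

  minikube -p functional-649830 version --short                # minikube version only
  minikube -p functional-649830 version -o json --components   # adds per-component versions (kubelet, containerd, and so on)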

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-649830 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-649830
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-649830
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-649830 image ls --format short --alsologtostderr:
I0909 11:55:44.233233  334977 out.go:345] Setting OutFile to fd 1 ...
I0909 11:55:44.233409  334977 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:55:44.233419  334977 out.go:358] Setting ErrFile to fd 2...
I0909 11:55:44.233424  334977 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:55:44.233692  334977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
I0909 11:55:44.234353  334977 config.go:182] Loaded profile config "functional-649830": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0909 11:55:44.234474  334977 config.go:182] Loaded profile config "functional-649830": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0909 11:55:44.234973  334977 cli_runner.go:164] Run: docker container inspect functional-649830 --format={{.State.Status}}
I0909 11:55:44.253912  334977 ssh_runner.go:195] Run: systemctl --version
I0909 11:55:44.253970  334977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-649830
I0909 11:55:44.280910  334977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/functional-649830/id_rsa Username:docker}
I0909 11:55:44.369835  334977 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
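
The four ImageList subtests drive the same image ls code path with different --format values (the table, json and yaml runs follow below); a sketch:

  minikube -p functional-649830 image ls --format short   # repo:tag lines, as above
  minikube -p functional-649830 image ls --format table   # adds image ID and size columns
  minikube -p functional-649830 image ls --format json    # machine-readable, includes repoDigests
  minikube -p functional-649830 image ls --format yaml    # same fields as the JSON form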

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image ls --format table --alsologtostderr
E0909 11:55:44.669043  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-649830 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
| docker.io/library/minikube-local-cache-test | functional-649830  | sha256:59513c | 992B   |
| docker.io/kicbase/echo-server               | functional-649830  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-649830 image ls --format table --alsologtostderr:
I0909 11:55:44.539813  335045 out.go:345] Setting OutFile to fd 1 ...
I0909 11:55:44.540044  335045 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:55:44.540083  335045 out.go:358] Setting ErrFile to fd 2...
I0909 11:55:44.540115  335045 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:55:44.540387  335045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
I0909 11:55:44.541160  335045 config.go:182] Loaded profile config "functional-649830": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0909 11:55:44.541360  335045 config.go:182] Loaded profile config "functional-649830": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0909 11:55:44.541879  335045 cli_runner.go:164] Run: docker container inspect functional-649830 --format={{.State.Status}}
I0909 11:55:44.561278  335045 ssh_runner.go:195] Run: systemctl --version
I0909 11:55:44.561328  335045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-649830
I0909 11:55:44.588337  335045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/functional-649830/id_rsa Username:docker}
I0909 11:55:44.700313  335045 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-649830 image ls --format json --alsologtostderr:
[{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:59513c224367ffb30754224b6c7eadbc0f40d501d7ccf07e71c077fd2b3b28c8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-649830"],"size":"992"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-649830"],"size":"2173567"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-649830 image ls --format json --alsologtostderr:
I0909 11:55:44.501807  335040 out.go:345] Setting OutFile to fd 1 ...
I0909 11:55:44.501990  335040 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:55:44.502001  335040 out.go:358] Setting ErrFile to fd 2...
I0909 11:55:44.502007  335040 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:55:44.502261  335040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
I0909 11:55:44.502882  335040 config.go:182] Loaded profile config "functional-649830": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0909 11:55:44.503003  335040 config.go:182] Loaded profile config "functional-649830": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0909 11:55:44.503498  335040 cli_runner.go:164] Run: docker container inspect functional-649830 --format={{.State.Status}}
I0909 11:55:44.520834  335040 ssh_runner.go:195] Run: systemctl --version
I0909 11:55:44.520897  335040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-649830
I0909 11:55:44.553741  335040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/functional-649830/id_rsa Username:docker}
I0909 11:55:44.651758  335040 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-649830 image ls --format yaml --alsologtostderr:
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:59513c224367ffb30754224b6c7eadbc0f40d501d7ccf07e71c077fd2b3b28c8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-649830
size: "992"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-649830
size: "2173567"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-649830 image ls --format yaml --alsologtostderr:
I0909 11:55:44.246340  334978 out.go:345] Setting OutFile to fd 1 ...
I0909 11:55:44.247551  334978 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:55:44.247589  334978 out.go:358] Setting ErrFile to fd 2...
I0909 11:55:44.247608  334978 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:55:44.247895  334978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
I0909 11:55:44.248587  334978 config.go:182] Loaded profile config "functional-649830": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0909 11:55:44.248788  334978 config.go:182] Loaded profile config "functional-649830": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0909 11:55:44.249327  334978 cli_runner.go:164] Run: docker container inspect functional-649830 --format={{.State.Status}}
I0909 11:55:44.273562  334978 ssh_runner.go:195] Run: systemctl --version
I0909 11:55:44.273615  334978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-649830
I0909 11:55:44.300016  334978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/functional-649830/id_rsa Username:docker}
I0909 11:55:44.386285  334978 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-649830 ssh pgrep buildkitd: exit status 1 (262.607242ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image build -t localhost/my-image:functional-649830 testdata/build --alsologtostderr
E0909 11:55:45.950563  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:55:48.512582  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-649830 image build -t localhost/my-image:functional-649830 testdata/build --alsologtostderr: (3.550904431s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-649830 image build -t localhost/my-image:functional-649830 testdata/build --alsologtostderr:
I0909 11:55:45.015478  335160 out.go:345] Setting OutFile to fd 1 ...
I0909 11:55:45.016522  335160 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:55:45.016537  335160 out.go:358] Setting ErrFile to fd 2...
I0909 11:55:45.016543  335160 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:55:45.016847  335160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
I0909 11:55:45.017527  335160 config.go:182] Loaded profile config "functional-649830": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0909 11:55:45.018303  335160 config.go:182] Loaded profile config "functional-649830": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0909 11:55:45.018867  335160 cli_runner.go:164] Run: docker container inspect functional-649830 --format={{.State.Status}}
I0909 11:55:45.066742  335160 ssh_runner.go:195] Run: systemctl --version
I0909 11:55:45.066834  335160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-649830
I0909 11:55:45.168579  335160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/functional-649830/id_rsa Username:docker}
I0909 11:55:45.290694  335160 build_images.go:161] Building image from path: /tmp/build.2219918763.tar
I0909 11:55:45.290820  335160 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0909 11:55:45.304269  335160 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2219918763.tar
I0909 11:55:45.310023  335160 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2219918763.tar: stat -c "%s %y" /var/lib/minikube/build/build.2219918763.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2219918763.tar': No such file or directory
I0909 11:55:45.310068  335160 ssh_runner.go:362] scp /tmp/build.2219918763.tar --> /var/lib/minikube/build/build.2219918763.tar (3072 bytes)
I0909 11:55:45.349176  335160 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2219918763
I0909 11:55:45.363416  335160 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2219918763 -xf /var/lib/minikube/build/build.2219918763.tar
I0909 11:55:45.376654  335160 containerd.go:394] Building image: /var/lib/minikube/build/build.2219918763
I0909 11:55:45.376739  335160 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2219918763 --local dockerfile=/var/lib/minikube/build/build.2219918763 --output type=image,name=localhost/my-image:functional-649830
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:f77cddaef00d75cbc4911c244d65899cdf719cb4498a3adfc859a119db0c07ce
#8 exporting manifest sha256:f77cddaef00d75cbc4911c244d65899cdf719cb4498a3adfc859a119db0c07ce 0.0s done
#8 exporting config sha256:1b5019b360b41cd7895e20251bf2ae6d5135f0250bc0b8feab4e9bd79dfedadd 0.0s done
#8 naming to localhost/my-image:functional-649830 done
#8 DONE 0.1s
I0909 11:55:48.491515  335160 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2219918763 --local dockerfile=/var/lib/minikube/build/build.2219918763 --output type=image,name=localhost/my-image:functional-649830: (3.114749432s)
I0909 11:55:48.491593  335160 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2219918763
I0909 11:55:48.501263  335160 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2219918763.tar
I0909 11:55:48.510316  335160 build_images.go:217] Built localhost/my-image:functional-649830 from /tmp/build.2219918763.tar
I0909 11:55:48.510348  335160 build_images.go:133] succeeded building to: functional-649830
I0909 11:55:48.510353  335160 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.03s)
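
As the stderr above shows, on containerd image build ships the build context to the node as a tar and runs buildctl there. The user-facing command reduces to a sketch like:

  minikube -p functional-649830 image build -t localhost/my-image:functional-649830 testdata/build
  minikube -p functional-649830 image ls   # confirm localhost/my-image is now in the containerd store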

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2024/09/09 11:55:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-649830
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image load --daemon kicbase/echo-server:functional-649830 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-649830 image load --daemon kicbase/echo-server:functional-649830 --alsologtostderr: (1.134003104s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)
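
image load --daemon copies an image from the host's Docker daemon into the cluster's container runtime; the round trip Setup and this test perform, as a sketch:

  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-649830
  minikube -p functional-649830 image load --daemon kicbase/echo-server:functional-649830
  minikube -p functional-649830 image ls   # the functional-649830 tag should now be listed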

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)
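
All three UpdateContextCmd variants exercise the same subcommand, which rewrites the profile's kubeconfig entry to the cluster's current API server address; a sketch:

  minikube -p functional-649830 update-context       # sync kubeconfig IP/port with the running cluster
  kubectl config get-contexts functional-649830      # the entry update-context maintains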

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image load --daemon kicbase/echo-server:functional-649830 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-649830 image load --daemon kicbase/echo-server:functional-649830 --alsologtostderr: (1.109775416s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-649830
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image load --daemon kicbase/echo-server:functional-649830 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image save kicbase/echo-server:functional-649830 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image rm kicbase/echo-server:functional-649830 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
E0909 11:55:43.373705  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:55:43.385165  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:55:43.396508  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:55:43.418260  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:55:43.459656  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image ls
E0909 11:55:43.541033  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:55:43.702582  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)
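
ImageSaveToFile, ImageRemove and ImageLoadFromFile round-trip an image through a tarball rather than the Docker daemon; a sketch using the tags from this run:

  minikube -p functional-649830 image save kicbase/echo-server:functional-649830 ./echo-server-save.tar
  minikube -p functional-649830 image rm kicbase/echo-server:functional-649830
  minikube -p functional-649830 image load ./echo-server-save.tar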

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-649830
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-649830 image save --daemon kicbase/echo-server:functional-649830 --alsologtostderr
E0909 11:55:44.026876  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-649830
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-649830
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-649830
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-649830
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (116.29s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-036926 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0909 11:55:53.634840  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:56:03.876164  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:56:24.358284  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:57:05.320235  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-036926 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m55.450235829s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (116.29s)
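
The --ha flag provisions a cluster with multiple control-plane nodes (three by default on current minikube, an assumption not verifiable from this log alone). Stripped of the CI binary path and log flags, the invocation reduces to:

  minikube start -p ha-036926 --ha --wait=true --memory=2200 --driver=docker --container-runtime=containerd
  minikube -p ha-036926 status   # prints one status block per node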

TestMultiControlPlane/serial/DeployApp (32.74s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-036926 -- rollout status deployment/busybox: (29.749848132s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-kcmd9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-lqvln -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-zcv8b -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-kcmd9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-lqvln -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-zcv8b -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-kcmd9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-lqvln -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-zcv8b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (32.74s)
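
The DeployApp assertions repeat the same DNS probe for every busybox pod. A condensed sketch of that loop, with pod discovery mirroring the jsonpath query above:

	# Wait for the test deployment, then resolve in-cluster and external names from each pod
	kubectl rollout status deployment/busybox
	for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
	  kubectl exec "$pod" -- nslookup kubernetes.io
	  kubectl exec "$pod" -- nslookup kubernetes.default
	  kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done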

TestMultiControlPlane/serial/PingHostFromPods (1.69s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-kcmd9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-kcmd9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-lqvln -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-lqvln -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-zcv8b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-036926 -- exec busybox-7dff88458-zcv8b -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.69s)
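
The awk/cut pipeline above extracts the resolved address of host.minikube.internal (line 5, field 3 of busybox's nslookup output) before pinging it once. Standalone, with $pod assumed to hold one of the pod names:

	# Resolve the host gateway name from inside the pod, then ping the extracted IP once
	HOST_IP=$(kubectl exec "$pod" -- sh -c \
	  "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl exec "$pod" -- sh -c "ping -c 1 $HOST_IP"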

TestMultiControlPlane/serial/AddWorkerNode (24.1s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-036926 -v=7 --alsologtostderr
E0909 11:58:27.241816  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-036926 -v=7 --alsologtostderr: (23.092047998s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-036926 status -v=7 --alsologtostderr: (1.011247739s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.10s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-036926 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

TestMultiControlPlane/serial/CopyFile (19.26s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp testdata/cp-test.txt ha-036926:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile948157417/001/cp-test_ha-036926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926:/home/docker/cp-test.txt ha-036926-m02:/home/docker/cp-test_ha-036926_ha-036926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m02 "sudo cat /home/docker/cp-test_ha-036926_ha-036926-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926:/home/docker/cp-test.txt ha-036926-m03:/home/docker/cp-test_ha-036926_ha-036926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m03 "sudo cat /home/docker/cp-test_ha-036926_ha-036926-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926:/home/docker/cp-test.txt ha-036926-m04:/home/docker/cp-test_ha-036926_ha-036926-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m04 "sudo cat /home/docker/cp-test_ha-036926_ha-036926-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp testdata/cp-test.txt ha-036926-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile948157417/001/cp-test_ha-036926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926-m02:/home/docker/cp-test.txt ha-036926:/home/docker/cp-test_ha-036926-m02_ha-036926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926 "sudo cat /home/docker/cp-test_ha-036926-m02_ha-036926.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926-m02:/home/docker/cp-test.txt ha-036926-m03:/home/docker/cp-test_ha-036926-m02_ha-036926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m03 "sudo cat /home/docker/cp-test_ha-036926-m02_ha-036926-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926-m02:/home/docker/cp-test.txt ha-036926-m04:/home/docker/cp-test_ha-036926-m02_ha-036926-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m04 "sudo cat /home/docker/cp-test_ha-036926-m02_ha-036926-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp testdata/cp-test.txt ha-036926-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile948157417/001/cp-test_ha-036926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926-m03:/home/docker/cp-test.txt ha-036926:/home/docker/cp-test_ha-036926-m03_ha-036926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926 "sudo cat /home/docker/cp-test_ha-036926-m03_ha-036926.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926-m03:/home/docker/cp-test.txt ha-036926-m02:/home/docker/cp-test_ha-036926-m03_ha-036926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m02 "sudo cat /home/docker/cp-test_ha-036926-m03_ha-036926-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926-m03:/home/docker/cp-test.txt ha-036926-m04:/home/docker/cp-test_ha-036926-m03_ha-036926-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m04 "sudo cat /home/docker/cp-test_ha-036926-m03_ha-036926-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp testdata/cp-test.txt ha-036926-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile948157417/001/cp-test_ha-036926-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926-m04:/home/docker/cp-test.txt ha-036926:/home/docker/cp-test_ha-036926-m04_ha-036926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926 "sudo cat /home/docker/cp-test_ha-036926-m04_ha-036926.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926-m04:/home/docker/cp-test.txt ha-036926-m02:/home/docker/cp-test_ha-036926-m04_ha-036926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m02 "sudo cat /home/docker/cp-test_ha-036926-m04_ha-036926-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 cp ha-036926-m04:/home/docker/cp-test.txt ha-036926-m03:/home/docker/cp-test_ha-036926-m04_ha-036926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 ssh -n ha-036926-m03 "sudo cat /home/docker/cp-test_ha-036926-m04_ha-036926-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.26s)
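
Every CopyFile permutation above is the same three-step round trip. One source/target pair, using the node names from this run (minikube standing in for out/minikube-linux-arm64):

	# Push a file to one node, copy it node-to-node, then verify the contents over ssh
	minikube -p ha-036926 cp testdata/cp-test.txt ha-036926:/home/docker/cp-test.txt
	minikube -p ha-036926 cp ha-036926:/home/docker/cp-test.txt \
	  ha-036926-m02:/home/docker/cp-test_ha-036926_ha-036926-m02.txt
	minikube -p ha-036926 ssh -n ha-036926-m02 \
	  "sudo cat /home/docker/cp-test_ha-036926_ha-036926-m02.txt"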

TestMultiControlPlane/serial/StopSecondaryNode (12.89s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-036926 node stop m02 -v=7 --alsologtostderr: (12.172997729s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-036926 status -v=7 --alsologtostderr: exit status 7 (719.253366ms)

-- stdout --
	ha-036926
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-036926-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-036926-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-036926-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0909 11:59:18.801710  351344 out.go:345] Setting OutFile to fd 1 ...
	I0909 11:59:18.801825  351344 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:59:18.801836  351344 out.go:358] Setting ErrFile to fd 2...
	I0909 11:59:18.801841  351344 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:59:18.802087  351344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
	I0909 11:59:18.802271  351344 out.go:352] Setting JSON to false
	I0909 11:59:18.802310  351344 mustload.go:65] Loading cluster: ha-036926
	I0909 11:59:18.802366  351344 notify.go:220] Checking for updates...
	I0909 11:59:18.802801  351344 config.go:182] Loaded profile config "ha-036926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0909 11:59:18.802815  351344 status.go:255] checking status of ha-036926 ...
	I0909 11:59:18.803616  351344 cli_runner.go:164] Run: docker container inspect ha-036926 --format={{.State.Status}}
	I0909 11:59:18.821853  351344 status.go:330] ha-036926 host status = "Running" (err=<nil>)
	I0909 11:59:18.821883  351344 host.go:66] Checking if "ha-036926" exists ...
	I0909 11:59:18.822203  351344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-036926
	I0909 11:59:18.846431  351344 host.go:66] Checking if "ha-036926" exists ...
	I0909 11:59:18.846735  351344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0909 11:59:18.846779  351344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-036926
	I0909 11:59:18.870104  351344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/ha-036926/id_rsa Username:docker}
	I0909 11:59:18.958517  351344 ssh_runner.go:195] Run: systemctl --version
	I0909 11:59:18.962602  351344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0909 11:59:18.974213  351344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 11:59:19.036643  351344 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-09 11:59:19.020009625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0909 11:59:19.042221  351344 kubeconfig.go:125] found "ha-036926" server: "https://192.168.49.254:8443"
	I0909 11:59:19.042267  351344 api_server.go:166] Checking apiserver status ...
	I0909 11:59:19.042920  351344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0909 11:59:19.060621  351344 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup
	I0909 11:59:19.072372  351344 api_server.go:182] apiserver freezer: "9:freezer:/docker/ef2cb49999016361944f43b35726e7958cdbb5c053b13468576fb0310a4443b9/kubepods/burstable/pod0dc6cd7ea41134ea82a981b863ebbe4f/84fe69b4298a3ba9135f58b5a25051c2a09cc552fb0297544dcbf4904ca4deb2"
	I0909 11:59:19.072467  351344 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ef2cb49999016361944f43b35726e7958cdbb5c053b13468576fb0310a4443b9/kubepods/burstable/pod0dc6cd7ea41134ea82a981b863ebbe4f/84fe69b4298a3ba9135f58b5a25051c2a09cc552fb0297544dcbf4904ca4deb2/freezer.state
	I0909 11:59:19.081912  351344 api_server.go:204] freezer state: "THAWED"
	I0909 11:59:19.081945  351344 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0909 11:59:19.090489  351344 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0909 11:59:19.090522  351344 status.go:422] ha-036926 apiserver status = Running (err=<nil>)
	I0909 11:59:19.090534  351344 status.go:257] ha-036926 status: &{Name:ha-036926 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0909 11:59:19.090552  351344 status.go:255] checking status of ha-036926-m02 ...
	I0909 11:59:19.090877  351344 cli_runner.go:164] Run: docker container inspect ha-036926-m02 --format={{.State.Status}}
	I0909 11:59:19.108080  351344 status.go:330] ha-036926-m02 host status = "Stopped" (err=<nil>)
	I0909 11:59:19.108100  351344 status.go:343] host is not running, skipping remaining checks
	I0909 11:59:19.108107  351344 status.go:257] ha-036926-m02 status: &{Name:ha-036926-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0909 11:59:19.108128  351344 status.go:255] checking status of ha-036926-m03 ...
	I0909 11:59:19.108446  351344 cli_runner.go:164] Run: docker container inspect ha-036926-m03 --format={{.State.Status}}
	I0909 11:59:19.124952  351344 status.go:330] ha-036926-m03 host status = "Running" (err=<nil>)
	I0909 11:59:19.124977  351344 host.go:66] Checking if "ha-036926-m03" exists ...
	I0909 11:59:19.125327  351344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-036926-m03
	I0909 11:59:19.144187  351344 host.go:66] Checking if "ha-036926-m03" exists ...
	I0909 11:59:19.144486  351344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0909 11:59:19.144525  351344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-036926-m03
	I0909 11:59:19.166887  351344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/ha-036926-m03/id_rsa Username:docker}
	I0909 11:59:19.255315  351344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0909 11:59:19.268096  351344 kubeconfig.go:125] found "ha-036926" server: "https://192.168.49.254:8443"
	I0909 11:59:19.268123  351344 api_server.go:166] Checking apiserver status ...
	I0909 11:59:19.268168  351344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0909 11:59:19.279702  351344 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1443/cgroup
	I0909 11:59:19.289994  351344 api_server.go:182] apiserver freezer: "9:freezer:/docker/b28b45edebd99a2ee33d962178e60e2253bbc57012d51d23f4665e489e251281/kubepods/burstable/pod34d4cfa5a84e24e507735500708b5f15/786ba73f4349b805a19c5124acb24a2a8e49241fb543d04f0d4ae41c1f6b4bff"
	I0909 11:59:19.290087  351344 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b28b45edebd99a2ee33d962178e60e2253bbc57012d51d23f4665e489e251281/kubepods/burstable/pod34d4cfa5a84e24e507735500708b5f15/786ba73f4349b805a19c5124acb24a2a8e49241fb543d04f0d4ae41c1f6b4bff/freezer.state
	I0909 11:59:19.300653  351344 api_server.go:204] freezer state: "THAWED"
	I0909 11:59:19.300689  351344 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0909 11:59:19.308726  351344 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0909 11:59:19.308756  351344 status.go:422] ha-036926-m03 apiserver status = Running (err=<nil>)
	I0909 11:59:19.308775  351344 status.go:257] ha-036926-m03 status: &{Name:ha-036926-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0909 11:59:19.308795  351344 status.go:255] checking status of ha-036926-m04 ...
	I0909 11:59:19.309134  351344 cli_runner.go:164] Run: docker container inspect ha-036926-m04 --format={{.State.Status}}
	I0909 11:59:19.325740  351344 status.go:330] ha-036926-m04 host status = "Running" (err=<nil>)
	I0909 11:59:19.325773  351344 host.go:66] Checking if "ha-036926-m04" exists ...
	I0909 11:59:19.326162  351344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-036926-m04
	I0909 11:59:19.343673  351344 host.go:66] Checking if "ha-036926-m04" exists ...
	I0909 11:59:19.344016  351344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0909 11:59:19.344063  351344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-036926-m04
	I0909 11:59:19.363094  351344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/ha-036926-m04/id_rsa Username:docker}
	I0909 11:59:19.455415  351344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0909 11:59:19.468544  351344 status.go:257] ha-036926-m04 status: &{Name:ha-036926-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.89s)
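
The Non-zero exit above is the expected path: in this run, status returns exit code 7 once any node in the profile is stopped, so a wrapper script should treat 7 as "degraded, output still valid" rather than a hard failure. A sketch under that assumption:

	minikube -p ha-036926 status -v=7 --alsologtostderr
	rc=$?
	# 0 = everything running; 7 = some host/kubelet/apiserver stopped (as observed above)
	if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then
	  echo "status failed unexpectedly: rc=$rc" >&2
	fi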

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.52s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-036926 node start m02 -v=7 --alsologtostderr: (17.292301906s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-036926 status -v=7 --alsologtostderr: (1.132099829s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.52s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (123.56s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-036926 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-036926 -v=7 --alsologtostderr
E0909 11:59:54.462604  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:59:54.469060  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:59:54.480512  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:59:54.502230  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:59:54.543597  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:59:54.625043  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:59:54.786698  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:59:55.108506  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:59:55.750791  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:59:57.032779  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:59:59.594611  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:00:04.716933  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-036926 -v=7 --alsologtostderr: (26.462862297s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-036926 --wait=true -v=7 --alsologtostderr
E0909 12:00:14.959135  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:00:35.441465  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:00:43.371956  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:01:11.083979  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:01:16.403551  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-036926 --wait=true -v=7 --alsologtostderr: (1m36.956333756s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-036926
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (123.56s)
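
The invariant being checked is that a full stop/start cycle preserves the node set. A minimal sketch of the same comparison:

	minikube node list -p ha-036926 > /tmp/nodes.before
	minikube stop -p ha-036926 -v=7 --alsologtostderr
	minikube start -p ha-036926 --wait=true -v=7 --alsologtostderr
	minikube node list -p ha-036926 > /tmp/nodes.after
	# An empty diff means the restart kept all nodes
	diff /tmp/nodes.before /tmp/nodes.after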

TestMultiControlPlane/serial/DeleteSecondaryNode (10.79s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-036926 node delete m03 -v=7 --alsologtostderr: (9.820702123s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.79s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

TestMultiControlPlane/serial/StopCluster (36.15s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-036926 stop -v=7 --alsologtostderr: (36.034867521s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-036926 status -v=7 --alsologtostderr: exit status 7 (113.064318ms)

-- stdout --
	ha-036926
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-036926-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-036926-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0909 12:02:30.371652  365124 out.go:345] Setting OutFile to fd 1 ...
	I0909 12:02:30.371794  365124 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 12:02:30.371806  365124 out.go:358] Setting ErrFile to fd 2...
	I0909 12:02:30.371811  365124 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 12:02:30.372101  365124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
	I0909 12:02:30.372291  365124 out.go:352] Setting JSON to false
	I0909 12:02:30.372333  365124 mustload.go:65] Loading cluster: ha-036926
	I0909 12:02:30.372412  365124 notify.go:220] Checking for updates...
	I0909 12:02:30.372777  365124 config.go:182] Loaded profile config "ha-036926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0909 12:02:30.372796  365124 status.go:255] checking status of ha-036926 ...
	I0909 12:02:30.373290  365124 cli_runner.go:164] Run: docker container inspect ha-036926 --format={{.State.Status}}
	I0909 12:02:30.392821  365124 status.go:330] ha-036926 host status = "Stopped" (err=<nil>)
	I0909 12:02:30.392844  365124 status.go:343] host is not running, skipping remaining checks
	I0909 12:02:30.392853  365124 status.go:257] ha-036926 status: &{Name:ha-036926 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0909 12:02:30.392882  365124 status.go:255] checking status of ha-036926-m02 ...
	I0909 12:02:30.393188  365124 cli_runner.go:164] Run: docker container inspect ha-036926-m02 --format={{.State.Status}}
	I0909 12:02:30.418816  365124 status.go:330] ha-036926-m02 host status = "Stopped" (err=<nil>)
	I0909 12:02:30.418891  365124 status.go:343] host is not running, skipping remaining checks
	I0909 12:02:30.418912  365124 status.go:257] ha-036926-m02 status: &{Name:ha-036926-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0909 12:02:30.418931  365124 status.go:255] checking status of ha-036926-m04 ...
	I0909 12:02:30.419234  365124 cli_runner.go:164] Run: docker container inspect ha-036926-m04 --format={{.State.Status}}
	I0909 12:02:30.436530  365124 status.go:330] ha-036926-m04 host status = "Stopped" (err=<nil>)
	I0909 12:02:30.436552  365124 status.go:343] host is not running, skipping remaining checks
	I0909 12:02:30.436559  365124 status.go:257] ha-036926-m04 status: &{Name:ha-036926-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.15s)

TestMultiControlPlane/serial/RestartCluster (79.48s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-036926 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0909 12:02:38.325012  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-036926 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.54015051s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (79.48s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.6s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.60s)

TestMultiControlPlane/serial/AddSecondaryNode (44.45s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-036926 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-036926 --control-plane -v=7 --alsologtostderr: (43.44918974s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-036926 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.45s)
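
Restoring the third control plane reuses node add with --control-plane, exactly as run above; by hand:

	# Join an additional control-plane node to the running HA cluster, then re-check status
	minikube node add -p ha-036926 --control-plane -v=7 --alsologtostderr
	minikube -p ha-036926 status -v=7 --alsologtostderr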

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

TestJSONOutput/start/Command (51.9s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-261244 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0909 12:04:54.462775  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:05:22.166395  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-261244 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (51.896154713s)
--- PASS: TestJSONOutput/start/Command (51.90s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-261244 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.7s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-261244 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.74s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-261244 --output=json --user=testUser
E0909 12:05:43.371727  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-261244 --output=json --user=testUser: (5.742981599s)
--- PASS: TestJSONOutput/stop/Command (5.74s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-401358 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-401358 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.305594ms)

-- stdout --
	{"specversion":"1.0","id":"02b491df-2789-439a-8623-121e2616e9b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-401358] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9ed1cc3-210c-4726-b978-be494eb0d439","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19584"}}
	{"specversion":"1.0","id":"50e3585d-a90c-4d15-bb9d-c89876f68dc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6c2f6c76-4269-4a89-a29b-e25d16b9ed90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19584-293351/kubeconfig"}}
	{"specversion":"1.0","id":"d2245f23-9a85-4a71-8b8c-b2ae6eb57793","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-293351/.minikube"}}
	{"specversion":"1.0","id":"27f980f8-48a8-4f6b-913b-1edbe04e94d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1b467de9-1c2d-432f-b728-5eeb9917186a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"930a3a77-d491-4158-b82b-df2fa79f648c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-401358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-401358
--- PASS: TestErrorJSONOutput (0.23s)
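
Since --output=json emits one CloudEvents-style object per line (as in the stdout block above), the stream is easy to post-process. A sketch filtering out error events with jq, using the field names visible in this run (profile name hypothetical):

	minikube start -p demo --output=json --driver=fail 2>/dev/null \
	  | jq -r 'select(.type | endswith(".error")) | .data.name + ": " + .data.message'
	# Expected here: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64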

TestKicCustomNetwork/create_custom_network (39.6s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-988713 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-988713 --network=: (37.481417679s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-988713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-988713
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-988713: (2.101406337s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.60s)

TestKicCustomNetwork/use_default_bridge_network (33.02s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-437137 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-437137 --network=bridge: (30.750051702s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-437137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-437137
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-437137: (2.253382439s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.02s)

TestKicExistingNetwork (30.94s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-796930 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-796930 --network=existing-network: (28.884321148s)
helpers_test.go:175: Cleaning up "existing-network-796930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-796930
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-796930: (1.893552775s)
--- PASS: TestKicExistingNetwork (30.94s)
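
Note: here the network exists before minikube starts, so minikube joins it rather than creating one. A rough by-hand equivalent (names illustrative; the test itself pre-creates the network in Go before calling start):

    docker network create existing-network
    minikube start -p demo --network=existing-network
    minikube delete -p demo
    docker network rm existing-network    # minikube should leave a network it did not create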

TestKicCustomSubnet (34.81s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-754817 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-754817 --subnet=192.168.60.0/24: (32.698968915s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-754817 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-754817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-754817
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-754817: (2.082269159s)
--- PASS: TestKicCustomSubnet (34.81s)
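
Note: --subnet pins the CIDR of the Docker network minikube creates, verified above with docker network inspect. Sketch (profile name illustrative):

    minikube start -p demo --subnet=192.168.60.0/24
    docker network inspect demo --format '{{(index .IPAM.Config 0).Subnet}}'    # expected: 192.168.60.0/24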

TestKicStaticIP (35.83s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-261557 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-261557 --static-ip=192.168.200.200: (33.532198097s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-261557 ip
helpers_test.go:175: Cleaning up "static-ip-261557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-261557
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-261557: (2.156378343s)
--- PASS: TestKicStaticIP (35.83s)
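
Note: --static-ip assigns the node container a fixed address, confirmed via minikube ip. Sketch (profile name illustrative):

    minikube start -p demo --static-ip=192.168.200.200
    minikube -p demo ip    # expected: 192.168.200.200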

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (71.75s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-782224 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-782224 --driver=docker  --container-runtime=containerd: (35.552724533s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-785038 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-785038 --driver=docker  --container-runtime=containerd: (30.518330448s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-782224
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-785038
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-785038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-785038
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-785038: (2.173196842s)
helpers_test.go:175: Cleaning up "first-782224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-782224
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-782224: (2.248403945s)
--- PASS: TestMinikubeProfile (71.75s)
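
Note: the profile commands used above let several clusters coexist, with one marked active. Condensed sketch (profile names illustrative):

    minikube start -p first --driver=docker --container-runtime=containerd
    minikube start -p second --driver=docker --container-runtime=containerd
    minikube profile first        # switch the active profile
    minikube profile list -ojson  # both profiles should be listed as valid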

TestMountStart/serial/StartWithMountFirst (6.18s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-039465 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0909 12:09:54.462238  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-039465 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.184098479s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.18s)
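
Note: the mount flags forward a host directory into the node (minikube's 9p mount); --no-kubernetes keeps startup minimal since only the mount is under test. Sketch (profile name illustrative):

    minikube start -p m1 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 \
        --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=containerd
    minikube -p m1 ssh -- ls /minikube-host    # the mounted host directory appears here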

TestMountStart/serial/VerifyMountFirst (0.55s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-039465 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.55s)

TestMountStart/serial/StartWithMountSecond (8.84s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-052593 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-052593 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.838481137s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.84s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-052593 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-039465 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-039465 --alsologtostderr -v=5: (1.667085945s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-052593 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-052593
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-052593: (1.195038978s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.33s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-052593
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-052593: (6.326082556s)
--- PASS: TestMountStart/serial/RestartStopped (7.33s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-052593 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (65.41s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-439497 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0909 12:10:43.372440  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-439497 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.855585533s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.41s)
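
Note: --nodes=2 provisions the control plane and a worker in one start; --wait=true blocks until the components report healthy. Sketch (profile name illustrative):

    minikube start -p mn --wait=true --memory=2200 --nodes=2 --driver=docker --container-runtime=containerd
    minikube -p mn status    # expect one Control Plane entry and one Worker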

TestMultiNode/serial/DeployApp2Nodes (18s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-439497 -- rollout status deployment/busybox: (15.779782862s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- exec busybox-7dff88458-4ck6q -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- exec busybox-7dff88458-xnp2x -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- exec busybox-7dff88458-4ck6q -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- exec busybox-7dff88458-xnp2x -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- exec busybox-7dff88458-4ck6q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- exec busybox-7dff88458-xnp2x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (18.00s)
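
Note: the DNS checks run nslookup from pods of the busybox deployment spread across both nodes. The same probe by hand, grabbing any pod name first (jsonpath as in the test; the manifest is the test's own testdata):

    minikube kubectl -p mn -- rollout status deployment/busybox
    POD=$(minikube kubectl -p mn -- get pods -o jsonpath='{.items[0].metadata.name}')
    minikube kubectl -p mn -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local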

TestMultiNode/serial/PingHostFrom2Pods (1.01s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- exec busybox-7dff88458-4ck6q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- exec busybox-7dff88458-4ck6q -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- exec busybox-7dff88458-xnp2x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-439497 -- exec busybox-7dff88458-xnp2x -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)

TestMultiNode/serial/AddNode (17.36s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-439497 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-439497 -v 3 --alsologtostderr: (16.591070971s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.36s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-439497 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (10.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 cp testdata/cp-test.txt multinode-439497:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 cp multinode-439497:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1883581511/001/cp-test_multinode-439497.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 cp multinode-439497:/home/docker/cp-test.txt multinode-439497-m02:/home/docker/cp-test_multinode-439497_multinode-439497-m02.txt
E0909 12:12:06.445506  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497-m02 "sudo cat /home/docker/cp-test_multinode-439497_multinode-439497-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 cp multinode-439497:/home/docker/cp-test.txt multinode-439497-m03:/home/docker/cp-test_multinode-439497_multinode-439497-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497-m03 "sudo cat /home/docker/cp-test_multinode-439497_multinode-439497-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 cp testdata/cp-test.txt multinode-439497-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 cp multinode-439497-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1883581511/001/cp-test_multinode-439497-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 cp multinode-439497-m02:/home/docker/cp-test.txt multinode-439497:/home/docker/cp-test_multinode-439497-m02_multinode-439497.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497 "sudo cat /home/docker/cp-test_multinode-439497-m02_multinode-439497.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 cp multinode-439497-m02:/home/docker/cp-test.txt multinode-439497-m03:/home/docker/cp-test_multinode-439497-m02_multinode-439497-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497-m03 "sudo cat /home/docker/cp-test_multinode-439497-m02_multinode-439497-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 cp testdata/cp-test.txt multinode-439497-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 cp multinode-439497-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1883581511/001/cp-test_multinode-439497-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 cp multinode-439497-m03:/home/docker/cp-test.txt multinode-439497:/home/docker/cp-test_multinode-439497-m03_multinode-439497.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497 "sudo cat /home/docker/cp-test_multinode-439497-m03_multinode-439497.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 cp multinode-439497-m03:/home/docker/cp-test.txt multinode-439497-m02:/home/docker/cp-test_multinode-439497-m03_multinode-439497-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 ssh -n multinode-439497-m02 "sudo cat /home/docker/cp-test_multinode-439497-m03_multinode-439497-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.07s)
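
Note: CopyFile round-trips a file host -> node, node -> host, and node -> node, re-reading it over ssh after each hop. Condensed sketch (profile/node names illustrative):

    minikube -p mn cp testdata/cp-test.txt mn:/home/docker/cp-test.txt             # host -> node
    minikube -p mn ssh -n mn "sudo cat /home/docker/cp-test.txt"                   # verify on the node
    minikube -p mn cp mn:/home/docker/cp-test.txt /tmp/cp-test-copy.txt            # node -> host
    minikube -p mn cp mn:/home/docker/cp-test.txt mn-m02:/home/docker/cp-test.txt  # node -> node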

TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-439497 node stop m03: (1.224716843s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-439497 status: exit status 7 (520.512926ms)

-- stdout --
	multinode-439497
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-439497-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-439497-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-439497 status --alsologtostderr: exit status 7 (509.956707ms)

-- stdout --
	multinode-439497
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-439497-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-439497-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0909 12:12:16.383962  418560 out.go:345] Setting OutFile to fd 1 ...
	I0909 12:12:16.384177  418560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 12:12:16.384204  418560 out.go:358] Setting ErrFile to fd 2...
	I0909 12:12:16.384222  418560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 12:12:16.384527  418560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
	I0909 12:12:16.384767  418560 out.go:352] Setting JSON to false
	I0909 12:12:16.384841  418560 mustload.go:65] Loading cluster: multinode-439497
	I0909 12:12:16.385298  418560 config.go:182] Loaded profile config "multinode-439497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0909 12:12:16.385378  418560 status.go:255] checking status of multinode-439497 ...
	I0909 12:12:16.385865  418560 notify.go:220] Checking for updates...
	I0909 12:12:16.386012  418560 cli_runner.go:164] Run: docker container inspect multinode-439497 --format={{.State.Status}}
	I0909 12:12:16.405421  418560 status.go:330] multinode-439497 host status = "Running" (err=<nil>)
	I0909 12:12:16.405448  418560 host.go:66] Checking if "multinode-439497" exists ...
	I0909 12:12:16.405759  418560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439497
	I0909 12:12:16.428976  418560 host.go:66] Checking if "multinode-439497" exists ...
	I0909 12:12:16.429295  418560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0909 12:12:16.429365  418560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439497
	I0909 12:12:16.451123  418560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/multinode-439497/id_rsa Username:docker}
	I0909 12:12:16.542769  418560 ssh_runner.go:195] Run: systemctl --version
	I0909 12:12:16.547138  418560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0909 12:12:16.558900  418560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 12:12:16.615029  418560 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-09 12:12:16.60391544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0909 12:12:16.615699  418560 kubeconfig.go:125] found "multinode-439497" server: "https://192.168.67.2:8443"
	I0909 12:12:16.615736  418560 api_server.go:166] Checking apiserver status ...
	I0909 12:12:16.615781  418560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0909 12:12:16.627875  418560 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1430/cgroup
	I0909 12:12:16.637772  418560 api_server.go:182] apiserver freezer: "9:freezer:/docker/51a2f05dbea4a5b7cadc5c1dff1ceff7fe7ad17a4843a384fc41aefd09ca0b49/kubepods/burstable/pod4fc1ba0e722c1539f40d80409f60da5d/e61eaaa0e210976be769e943d823d8436aebf2ecbb30761043cfe1495381d9bd"
	I0909 12:12:16.637949  418560 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/51a2f05dbea4a5b7cadc5c1dff1ceff7fe7ad17a4843a384fc41aefd09ca0b49/kubepods/burstable/pod4fc1ba0e722c1539f40d80409f60da5d/e61eaaa0e210976be769e943d823d8436aebf2ecbb30761043cfe1495381d9bd/freezer.state
	I0909 12:12:16.647370  418560 api_server.go:204] freezer state: "THAWED"
	I0909 12:12:16.647470  418560 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0909 12:12:16.655152  418560 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0909 12:12:16.655180  418560 status.go:422] multinode-439497 apiserver status = Running (err=<nil>)
	I0909 12:12:16.655191  418560 status.go:257] multinode-439497 status: &{Name:multinode-439497 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0909 12:12:16.655209  418560 status.go:255] checking status of multinode-439497-m02 ...
	I0909 12:12:16.655542  418560 cli_runner.go:164] Run: docker container inspect multinode-439497-m02 --format={{.State.Status}}
	I0909 12:12:16.672276  418560 status.go:330] multinode-439497-m02 host status = "Running" (err=<nil>)
	I0909 12:12:16.672301  418560 host.go:66] Checking if "multinode-439497-m02" exists ...
	I0909 12:12:16.672611  418560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439497-m02
	I0909 12:12:16.689010  418560 host.go:66] Checking if "multinode-439497-m02" exists ...
	I0909 12:12:16.689390  418560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0909 12:12:16.689446  418560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439497-m02
	I0909 12:12:16.706328  418560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/19584-293351/.minikube/machines/multinode-439497-m02/id_rsa Username:docker}
	I0909 12:12:16.794381  418560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0909 12:12:16.817260  418560 status.go:257] multinode-439497-m02 status: &{Name:multinode-439497-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0909 12:12:16.817295  418560 status.go:255] checking status of multinode-439497-m03 ...
	I0909 12:12:16.817728  418560 cli_runner.go:164] Run: docker container inspect multinode-439497-m03 --format={{.State.Status}}
	I0909 12:12:16.835498  418560 status.go:330] multinode-439497-m03 host status = "Stopped" (err=<nil>)
	I0909 12:12:16.835523  418560 status.go:343] host is not running, skipping remaining checks
	I0909 12:12:16.835531  418560 status.go:257] multinode-439497-m03 status: &{Name:multinode-439497-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
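
Note: with one worker stopped, status still prints the per-node table but exits 7, which lets callers detect a partially stopped cluster without parsing output. Sketch (profile name illustrative):

    minikube -p mn node stop m03
    minikube -p mn status; echo "exit=$?"    # table shows m03 Stopped; prints exit=7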

TestMultiNode/serial/StartAfterStop (10.06s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-439497 node start m03 -v=7 --alsologtostderr: (9.311618772s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.06s)

TestMultiNode/serial/RestartKeepsNodes (107.34s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-439497
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-439497
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-439497: (25.011450533s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-439497 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-439497 --wait=true -v=8 --alsologtostderr: (1m22.186345385s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-439497
--- PASS: TestMultiNode/serial/RestartKeepsNodes (107.34s)

TestMultiNode/serial/DeleteNode (5.62s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-439497 node delete m03: (4.925081345s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.62s)

TestMultiNode/serial/StopMultiNode (24.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-439497 stop: (23.862317276s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-439497 status: exit status 7 (92.25632ms)

-- stdout --
	multinode-439497
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-439497-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-439497 status --alsologtostderr: exit status 7 (96.465433ms)

-- stdout --
	multinode-439497
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-439497-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0909 12:14:43.858327  427040 out.go:345] Setting OutFile to fd 1 ...
	I0909 12:14:43.858432  427040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 12:14:43.858439  427040 out.go:358] Setting ErrFile to fd 2...
	I0909 12:14:43.858444  427040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 12:14:43.858700  427040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
	I0909 12:14:43.858893  427040 out.go:352] Setting JSON to false
	I0909 12:14:43.858936  427040 mustload.go:65] Loading cluster: multinode-439497
	I0909 12:14:43.858979  427040 notify.go:220] Checking for updates...
	I0909 12:14:43.859378  427040 config.go:182] Loaded profile config "multinode-439497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0909 12:14:43.859389  427040 status.go:255] checking status of multinode-439497 ...
	I0909 12:14:43.859901  427040 cli_runner.go:164] Run: docker container inspect multinode-439497 --format={{.State.Status}}
	I0909 12:14:43.883137  427040 status.go:330] multinode-439497 host status = "Stopped" (err=<nil>)
	I0909 12:14:43.883162  427040 status.go:343] host is not running, skipping remaining checks
	I0909 12:14:43.883169  427040 status.go:257] multinode-439497 status: &{Name:multinode-439497 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0909 12:14:43.883205  427040 status.go:255] checking status of multinode-439497-m02 ...
	I0909 12:14:43.883527  427040 cli_runner.go:164] Run: docker container inspect multinode-439497-m02 --format={{.State.Status}}
	I0909 12:14:43.907602  427040 status.go:330] multinode-439497-m02 host status = "Stopped" (err=<nil>)
	I0909 12:14:43.907628  427040 status.go:343] host is not running, skipping remaining checks
	I0909 12:14:43.907644  427040 status.go:257] multinode-439497-m02 status: &{Name:multinode-439497-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)

TestMultiNode/serial/RestartMultiNode (47.24s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-439497 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0909 12:14:54.462233  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-439497 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (46.551352162s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-439497 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.24s)

TestMultiNode/serial/ValidateNameConflict (33.8s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-439497
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-439497-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-439497-m02 --driver=docker  --container-runtime=containerd: exit status 14 (78.625644ms)

-- stdout --
	* [multinode-439497-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19584
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19584-293351/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-293351/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-439497-m02' is duplicated with machine name 'multinode-439497-m02' in profile 'multinode-439497'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-439497-m03 --driver=docker  --container-runtime=containerd
E0909 12:15:43.371538  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-439497-m03 --driver=docker  --container-runtime=containerd: (31.342230141s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-439497
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-439497: exit status 80 (313.2808ms)

-- stdout --
	* Adding node m03 to cluster multinode-439497 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-439497-m03 already exists in multinode-439497-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-439497-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-439497-m03: (1.996423077s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.80s)

TestPreload (121.18s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-096249 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0909 12:16:17.527878  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-096249 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m22.14312754s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-096249 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-096249 image pull gcr.io/k8s-minikube/busybox: (1.854974324s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-096249
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-096249: (12.206190132s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-096249 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-096249 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.962354262s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-096249 image list
helpers_test.go:175: Cleaning up "test-preload-096249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-096249
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-096249: (2.5284186s)
--- PASS: TestPreload (121.18s)
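
Note: the preload flow builds a cluster with --preload=false on an old Kubernetes version, pulls an extra image, stops, restarts on the current default version, and checks the pulled image survived. Sketch (profile name illustrative):

    minikube start -p pre --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=containerd
    minikube -p pre image pull gcr.io/k8s-minikube/busybox
    minikube stop -p pre
    minikube start -p pre --memory=2200 --driver=docker --container-runtime=containerd
    minikube -p pre image list    # busybox should still be present after the restart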

TestScheduledStopUnix (109.99s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-078470 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-078470 --memory=2048 --driver=docker  --container-runtime=containerd: (33.208010784s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-078470 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-078470 -n scheduled-stop-078470
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-078470 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-078470 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-078470 -n scheduled-stop-078470
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-078470
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-078470 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0909 12:19:54.462226  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-078470
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-078470: exit status 7 (70.422774ms)

-- stdout --
	scheduled-stop-078470
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-078470 -n scheduled-stop-078470
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-078470 -n scheduled-stop-078470: exit status 7 (62.790755ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-078470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-078470
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-078470: (5.20825431s)
--- PASS: TestScheduledStopUnix (109.99s)
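
Note: scheduled stop arms a background timer that can be re-armed or cancelled; {{.TimeToStop}} exposes the countdown, and status exits 7 once the stop fires. Sketch (profile name illustrative):

    minikube stop -p demo --schedule 5m
    minikube status -p demo --format='{{.TimeToStop}}'    # countdown to the scheduled stop
    minikube stop -p demo --cancel-scheduled              # disarm; the cluster keeps running
    minikube stop -p demo --schedule 15s                  # re-arm with a short timer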

TestInsufficientStorage (10.79s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-090456 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-090456 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.285621634s)

-- stdout --
	{"specversion":"1.0","id":"ef0cd75c-8079-44c2-a42b-2fe11dd08835","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-090456] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9b641dd5-b653-44fa-b0dd-e1c40a3594de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19584"}}
	{"specversion":"1.0","id":"7d72c36b-50c9-4535-9acd-197c2480b6d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"491b9392-2a5e-45c8-bb43-d256306fe6a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19584-293351/kubeconfig"}}
	{"specversion":"1.0","id":"91f93e99-1f88-473f-82a4-554a6eaf3b44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-293351/.minikube"}}
	{"specversion":"1.0","id":"0686ce41-25f3-4c8d-80eb-69b3dd9e59d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"baeb0adc-f6b1-4f0a-9f65-3f7cd4c22422","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"992304b3-1a3d-49c1-a998-d0ca34af9400","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"386e523c-165a-4381-80f3-2980b0b4b02a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9c01787e-ae42-4cd7-85d6-a6e04c06ec7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e07aefe-e23f-470f-8993-a250dcb90133","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"cbf39a75-bf4a-412c-aaa5-5566ae0350b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-090456\" primary control-plane node in \"insufficient-storage-090456\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"218bd59c-652d-4f2a-9b6c-8207ded53f79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5fce34f0-1530-45b4-b3ad-3f9b56bfcf07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a669a96-142b-4b55-a409-aeff9f577bc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-090456 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-090456 --output=json --layout=cluster: exit status 7 (279.245063ms)

-- stdout --
	{"Name":"insufficient-storage-090456","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-090456","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0909 12:20:08.674089  445743 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-090456" does not appear in /home/jenkins/minikube-integration/19584-293351/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-090456 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-090456 --output=json --layout=cluster: exit status 7 (290.336997ms)

-- stdout --
	{"Name":"insufficient-storage-090456","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-090456","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0909 12:20:08.964674  445804 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-090456" does not appear in /home/jenkins/minikube-integration/19584-293351/kubeconfig
	E0909 12:20:08.975402  445804 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/insufficient-storage-090456/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-090456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-090456
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-090456: (1.938792606s)
--- PASS: TestInsufficientStorage (10.79s)

TestRunningBinaryUpgrade (86.08s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2812968677 start -p running-upgrade-099959 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0909 12:25:43.372811  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2812968677 start -p running-upgrade-099959 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.760459197s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-099959 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-099959 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.822462399s)
helpers_test.go:175: Cleaning up "running-upgrade-099959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-099959
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-099959: (2.805329271s)
--- PASS: TestRunningBinaryUpgrade (86.08s)

TestKubernetesUpgrade (352.52s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-405863 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-405863 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (56.56201948s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-405863
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-405863: (1.213065342s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-405863 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-405863 status --format={{.Host}}: exit status 7 (69.003433ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-405863 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-405863 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m42.519577386s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-405863 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-405863 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-405863 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (121.667316ms)

-- stdout --
	* [kubernetes-upgrade-405863] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19584
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19584-293351/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-293351/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-405863
	    minikube start -p kubernetes-upgrade-405863 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4058632 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-405863 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-405863 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-405863 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.518776676s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-405863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-405863
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-405863: (2.318472095s)
--- PASS: TestKubernetesUpgrade (352.52s)

TestMissingContainerUpgrade (193.8s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3712464399 start -p missing-upgrade-219474 --memory=2200 --driver=docker  --container-runtime=containerd
E0909 12:20:43.372371  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3712464399 start -p missing-upgrade-219474 --memory=2200 --driver=docker  --container-runtime=containerd: (1m27.924187792s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-219474
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-219474: (10.32135939s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-219474
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-219474 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-219474 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m32.576184449s)
helpers_test.go:175: Cleaning up "missing-upgrade-219474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-219474
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-219474: (2.085223327s)
--- PASS: TestMissingContainerUpgrade (193.80s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-550105 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-550105 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (88.032915ms)

-- stdout --
	* [NoKubernetes-550105] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19584
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19584-293351/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-293351/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (37.94s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-550105 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-550105 --driver=docker  --container-runtime=containerd: (37.4680889s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-550105 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.94s)

TestNoKubernetes/serial/StartWithStopK8s (21.84s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-550105 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-550105 --no-kubernetes --driver=docker  --container-runtime=containerd: (19.588275895s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-550105 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-550105 status -o json: exit status 2 (303.555091ms)

-- stdout --
	{"Name":"NoKubernetes-550105","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-550105
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-550105: (1.948152254s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.84s)

TestNoKubernetes/serial/Start (9.05s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-550105 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-550105 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.054845133s)
--- PASS: TestNoKubernetes/serial/Start (9.05s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-550105 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-550105 "sudo systemctl is-active --quiet service kubelet": exit status 1 (390.517555ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

TestNoKubernetes/serial/ProfileList (1.15s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.15s)

TestNoKubernetes/serial/Stop (1.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-550105
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-550105: (1.277692294s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (7.41s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-550105 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-550105 --driver=docker  --container-runtime=containerd: (7.413585224s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.41s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-550105 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-550105 "sudo systemctl is-active --quiet service kubelet": exit status 1 (329.897322ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestStoppedBinaryUpgrade/Setup (0.77s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.77s)

TestStoppedBinaryUpgrade/Upgrade (110.54s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3512898522 start -p stopped-upgrade-819831 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3512898522 start -p stopped-upgrade-819831 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.441923394s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3512898522 -p stopped-upgrade-819831 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3512898522 -p stopped-upgrade-819831 stop: (19.954101015s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-819831 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0909 12:24:54.466047  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-819831 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.140901215s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (110.54s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-819831
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-819831: (1.235787772s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

TestPause/serial/Start (66.55s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-962785 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-962785 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m6.550980642s)
--- PASS: TestPause/serial/Start (66.55s)

TestPause/serial/SecondStartNoReconfiguration (7.26s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-962785 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-962785 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.245418765s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.26s)

TestPause/serial/Pause (1.28s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-962785 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-962785 --alsologtostderr -v=5: (1.278809683s)
--- PASS: TestPause/serial/Pause (1.28s)

TestPause/serial/VerifyStatus (0.43s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-962785 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-962785 --output=json --layout=cluster: exit status 2 (428.428383ms)

-- stdout --
	{"Name":"pause-962785","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-962785","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)

TestPause/serial/Unpause (0.91s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-962785 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

TestNetworkPlugins/group/false (5.13s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-987843 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-987843 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (252.955505ms)

-- stdout --
	* [false-987843] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19584
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19584-293351/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-293351/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0909 12:28:01.939258  486526 out.go:345] Setting OutFile to fd 1 ...
	I0909 12:28:01.939418  486526 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 12:28:01.939427  486526 out.go:358] Setting ErrFile to fd 2...
	I0909 12:28:01.939438  486526 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 12:28:01.939675  486526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-293351/.minikube/bin
	I0909 12:28:01.940110  486526 out.go:352] Setting JSON to false
	I0909 12:28:01.941070  486526 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7820,"bootTime":1725877062,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0909 12:28:01.941152  486526 start.go:139] virtualization:  
	I0909 12:28:01.943687  486526 out.go:177] * [false-987843] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0909 12:28:01.947182  486526 out.go:177]   - MINIKUBE_LOCATION=19584
	I0909 12:28:01.947267  486526 notify.go:220] Checking for updates...
	I0909 12:28:01.951546  486526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0909 12:28:01.953376  486526 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19584-293351/kubeconfig
	I0909 12:28:01.955273  486526 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-293351/.minikube
	I0909 12:28:01.957113  486526 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0909 12:28:01.959196  486526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0909 12:28:01.961889  486526 config.go:182] Loaded profile config "pause-962785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0909 12:28:01.962121  486526 driver.go:394] Setting default libvirt URI to qemu:///system
	I0909 12:28:02.002284  486526 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0909 12:28:02.002417  486526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 12:28:02.105332  486526 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-09-09 12:28:02.093157565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0909 12:28:02.105624  486526 docker.go:307] overlay module found
	I0909 12:28:02.112855  486526 out.go:177] * Using the docker driver based on user configuration
	I0909 12:28:02.115237  486526 start.go:297] selected driver: docker
	I0909 12:28:02.115256  486526 start.go:901] validating driver "docker" against <nil>
	I0909 12:28:02.115269  486526 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0909 12:28:02.117801  486526 out.go:201] 
	W0909 12:28:02.119655  486526 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0909 12:28:02.121998  486526 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-987843 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-987843

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-987843

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-987843

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-987843

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-987843

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-987843

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-987843

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-987843

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-987843

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-987843

>>> host: /etc/nsswitch.conf:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: /etc/hosts:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: /etc/resolv.conf:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-987843

>>> host: crictl pods:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: crictl containers:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> k8s: describe netcat deployment:
error: context "false-987843" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-987843" does not exist

>>> k8s: netcat logs:
error: context "false-987843" does not exist

>>> k8s: describe coredns deployment:
error: context "false-987843" does not exist

>>> k8s: describe coredns pods:
error: context "false-987843" does not exist

>>> k8s: coredns logs:
error: context "false-987843" does not exist

>>> k8s: describe api server pod(s):
error: context "false-987843" does not exist

>>> k8s: api server logs:
error: context "false-987843" does not exist

>>> host: /etc/cni:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: ip a s:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: ip r s:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: iptables-save:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: iptables table nat:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> k8s: describe kube-proxy daemon set:
error: context "false-987843" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-987843" does not exist

>>> k8s: kube-proxy logs:
error: context "false-987843" does not exist

>>> host: kubelet daemon status:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: kubelet daemon config:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> k8s: kubelet logs:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19584-293351/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 09 Sep 2024 12:27:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-962785
contexts:
- context:
    cluster: pause-962785
    extensions:
    - extension:
        last-update: Mon, 09 Sep 2024 12:27:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-962785
  name: pause-962785
current-context: pause-962785
kind: Config
preferences: {}
users:
- name: pause-962785
  user:
    client-certificate: /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/pause-962785/client.crt
    client-key: /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/pause-962785/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-987843

>>> host: docker daemon status:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: docker daemon config:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: /etc/docker/daemon.json:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: docker system info:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: cri-docker daemon status:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: cri-docker daemon config:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: cri-dockerd version:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: containerd daemon status:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: containerd daemon config:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: /etc/containerd/config.toml:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: containerd config dump:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: crio daemon status:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: crio daemon config:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: /etc/crio:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

>>> host: crio config:
* Profile "false-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987843"

----------------------- debugLogs end: false-987843 [took: 4.643947088s] --------------------------------
helpers_test.go:175: Cleaning up "false-987843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-987843
--- PASS: TestNetworkPlugins/group/false (5.13s)

TestPause/serial/PauseAgain (1.17s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-962785 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-962785 --alsologtostderr -v=5: (1.167613849s)
--- PASS: TestPause/serial/PauseAgain (1.17s)

TestPause/serial/DeletePaused (2.97s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-962785 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-962785 --alsologtostderr -v=5: (2.965681552s)
--- PASS: TestPause/serial/DeletePaused (2.97s)

TestPause/serial/VerifyDeletedResources (0.15s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-962785
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-962785: exit status 1 (19.514763ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-962785: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.15s)

TestStartStop/group/old-k8s-version/serial/FirstStart (149.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-532490 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0909 12:29:54.462836  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:30:43.371591  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-532490 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m29.132095805s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (149.13s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-532490 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1934aedf-c7c4-4784-be86-a0d63d61be3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1934aedf-c7c4-4784-be86-a0d63d61be3e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004967425s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-532490 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.89s)

TestStartStop/group/no-preload/serial/FirstStart (63.37s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-434774 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-434774 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m3.374057294s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.37s)
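
--preload=false disables the preloaded tarball of images and Kubernetes binaries, so every component is fetched at start time; that is the point of the no-preload group. Sketch with the logged flags:

    out/minikube-linux-arm64 start -p no-preload-434774 --memory=2200 --wait=true \
      --preload=false --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.31.0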

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-532490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-532490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.327249633s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-532490 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.65s)
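
The enable step points metrics-server at registry.k8s.io/echoserver:1.4 behind a deliberately unreachable registry (fake.domain), so only the recorded configuration, not a working addon, is being asserted. A quick manual check that the override landed (the grep is illustrative, not part of the test):

    kubectl --context old-k8s-version-532490 describe deploy/metrics-server -n kube-system | grep -i image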

TestStartStop/group/old-k8s-version/serial/Stop (13.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-532490 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-532490 --alsologtostderr -v=3: (13.520632167s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.52s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-532490 -n old-k8s-version-532490
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-532490 -n old-k8s-version-532490: exit status 7 (102.523224ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-532490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
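
Exit status 7 is expected here: as implemented in current minikube, status encodes host, cluster, and Kubernetes health as separate bits of the exit code, so a cleanly stopped profile sets all three (1+2+4). Checking by hand:

    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-532490
    echo $?    # 7 while the profile is stopped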

TestStartStop/group/old-k8s-version/serial/SecondStart (306.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-532490 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0909 12:32:57.529492  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-532490 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (5m6.500996308s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-532490 -n old-k8s-version-532490
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (306.88s)

TestStartStop/group/no-preload/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-434774 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [79779e3e-972d-4b51-ac53-9367db9c4f12] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [79779e3e-972d-4b51-ac53-9367db9c4f12] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00958989s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-434774 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.50s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.66s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-434774 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-434774 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.510669736s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-434774 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.66s)

TestStartStop/group/no-preload/serial/Stop (12.37s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-434774 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-434774 --alsologtostderr -v=3: (12.37102559s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.37s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-434774 -n no-preload-434774
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-434774 -n no-preload-434774: exit status 7 (82.827727ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-434774 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (269.01s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-434774 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0909 12:34:54.462145  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:35:43.371749  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-434774 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m28.563317884s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-434774 -n no-preload-434774
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (269.01s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kjdlx" [60707428-30c8-4afc-80ee-e62039a39231] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00439806s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kjdlx" [60707428-30c8-4afc-80ee-e62039a39231] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005165057s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-532490 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-532490 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)
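
VerifyKubernetesImages lists every image in the node's container store and flags anything outside the expected per-version set; the kindnetd and busybox entries above are known leftovers from the CNI and DeployApp steps, not failures. The listing can be inspected by hand (piping through jq is illustrative, not part of the test):

    out/minikube-linux-arm64 -p old-k8s-version-532490 image list --format=json | jq .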

TestStartStop/group/old-k8s-version/serial/Pause (3.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-532490 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-532490 -n old-k8s-version-532490
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-532490 -n old-k8s-version-532490: exit status 2 (379.58748ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-532490 -n old-k8s-version-532490
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-532490 -n old-k8s-version-532490: exit status 2 (345.903953ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-532490 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-532490 -n old-k8s-version-532490
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-532490 -n old-k8s-version-532490
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.04s)
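
The Pause subtest round-trips the control plane: after pause the API server reports Paused and the kubelet Stopped (both via the expected non-zero status exits), and unpause restores them. The same cycle by hand, assuming status --format accepts a combined Go template over the same status struct:

    out/minikube-linux-arm64 pause -p old-k8s-version-532490
    out/minikube-linux-arm64 status -p old-k8s-version-532490 --format='{{.APIServer}}/{{.Kubelet}}'
    out/minikube-linux-arm64 unpause -p old-k8s-version-532490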

TestStartStop/group/embed-certs/serial/FirstStart (64.87s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-526898 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-526898 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m4.872465862s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (64.87s)
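
--embed-certs inlines client certificate data directly into kubeconfig rather than referencing cert files on disk, which keeps the kubeconfig portable; the rest of the group verifies a cluster started this way behaves identically. Sketch with the logged flags:

    out/minikube-linux-arm64 start -p embed-certs-526898 --memory=2200 --wait=true \
      --embed-certs --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.31.0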

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jg6jt" [dd930ce9-7cc9-434f-a4f9-ef92f0907afb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004664537s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.16s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jg6jt" [dd930ce9-7cc9-434f-a4f9-ef92f0907afb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003723595s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-434774 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.16s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-434774 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/no-preload/serial/Pause (4.23s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-434774 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-434774 --alsologtostderr -v=1: (1.399667657s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-434774 -n no-preload-434774
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-434774 -n no-preload-434774: exit status 2 (438.067293ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-434774 -n no-preload-434774
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-434774 -n no-preload-434774: exit status 2 (383.904515ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-434774 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-434774 -n no-preload-434774
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-434774 -n no-preload-434774
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.23s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-160313 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-160313 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m1.999193594s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.00s)
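
--apiserver-port=8444 moves the API server off minikube's default 8443; the group then confirms nothing downstream assumes the default. Sketch, with an illustrative kubectl check that kubeconfig picked up the port:

    out/minikube-linux-arm64 start -p default-k8s-diff-port-160313 --memory=2200 --wait=true \
      --apiserver-port=8444 --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.31.0
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-160313")].cluster.server}'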

TestStartStop/group/embed-certs/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-526898 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b521ded8-1c5a-46dd-a75f-902dcb9f74c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b521ded8-1c5a-46dd-a75f-902dcb9f74c2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003716786s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-526898 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-526898 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-526898 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.049191308s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-526898 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/embed-certs/serial/Stop (12.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-526898 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-526898 --alsologtostderr -v=3: (12.207604607s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.21s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-526898 -n embed-certs-526898
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-526898 -n embed-certs-526898: exit status 7 (67.937018ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-526898 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (268.81s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-526898 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-526898 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m28.432964171s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-526898 -n embed-certs-526898
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.81s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-160313 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [36e52045-d6b7-4a5b-8dc8-455306fa14fc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [36e52045-d6b7-4a5b-8dc8-455306fa14fc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004293956s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-160313 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.60s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-160313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-160313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.548722929s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-160313 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.67s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-160313 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-160313 --alsologtostderr -v=3: (12.845753729s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.85s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-160313 -n default-k8s-diff-port-160313
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-160313 -n default-k8s-diff-port-160313: exit status 7 (72.71376ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-160313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (291.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-160313 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0909 12:39:54.463599  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:40:43.372052  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:41:56.845883  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:41:56.852407  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:41:56.863795  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:41:56.885295  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:41:56.926859  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:41:57.011682  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:41:57.173329  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:41:57.495104  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:41:58.137074  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:41:59.418724  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:42:01.980748  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:42:07.103189  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:42:17.344500  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:42:37.825833  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:43:08.735422  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:43:08.741877  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:43:08.753425  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:43:08.775042  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:43:08.816488  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:43:08.897896  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:43:09.059446  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:43:09.381141  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:43:10.023337  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:43:11.306919  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:43:13.868871  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:43:18.788100  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:43:18.990538  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:43:29.232756  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-160313 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m50.780181932s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-160313 -n default-k8s-diff-port-160313
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (291.24s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-26mx4" [5f19fd4d-9fec-40f7-a217-33eddcf1c5b3] Running
E0909 12:43:49.714234  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003676186s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-26mx4" [5f19fd4d-9fec-40f7-a217-33eddcf1c5b3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004207209s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-526898 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-526898 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/embed-certs/serial/Pause (3.25s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-526898 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-526898 -n embed-certs-526898
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-526898 -n embed-certs-526898: exit status 2 (339.144548ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-526898 -n embed-certs-526898
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-526898 -n embed-certs-526898: exit status 2 (346.906671ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-526898 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-526898 -n embed-certs-526898
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-526898 -n embed-certs-526898
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.25s)

TestStartStop/group/newest-cni/serial/FirstStart (41.28s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-735028 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0909 12:44:30.676118  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-735028 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (41.277025036s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.28s)
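
newest-cni starts with a bare CNI configuration: --network-plugin=cni plus a custom pod CIDR passed through to kubeadm, and --wait narrowed to apiserver,system_pods,default_sa because ordinary pods cannot schedule until a CNI is actually installed (hence the WARNING lines in the later subtests). Trimmed sketch of the logged invocation:

    out/minikube-linux-arm64 start -p newest-cni-735028 --memory=2200 \
      --wait=apiserver,system_pods,default_sa --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.0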

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4p6qk" [5d5238d0-35d7-43fc-b36a-d526e148507e] Running
E0909 12:44:40.709926  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004872664s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.4s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-735028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-735028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.401000455s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.40s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4p6qk" [5d5238d0-35d7-43fc-b36a-d526e148507e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003575148s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-160313 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/newest-cni/serial/Stop (1.54s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-735028 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-735028 --alsologtostderr -v=3: (1.540750078s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.54s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-735028 -n newest-cni-735028
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-735028 -n newest-cni-735028: exit status 7 (76.673218ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-735028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (21.46s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-735028 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-735028 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (21.086536098s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-735028 -n newest-cni-735028
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.46s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-160313 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-160313 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-160313 -n default-k8s-diff-port-160313
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-160313 -n default-k8s-diff-port-160313: exit status 2 (343.77134ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-160313 -n default-k8s-diff-port-160313
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-160313 -n default-k8s-diff-port-160313: exit status 2 (407.498402ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-160313 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-160313 --alsologtostderr -v=1: (1.02311137s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-160313 -n default-k8s-diff-port-160313
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-160313 -n default-k8s-diff-port-160313
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.78s)

TestNetworkPlugins/group/auto/Start (73.33s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-987843 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-987843 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m13.331613555s)
--- PASS: TestNetworkPlugins/group/auto/Start (73.33s)
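
The auto variant passes no CNI selection at all, so minikube chooses its default network plugin for the docker-driver/containerd combination; later subtests in this group exercise DNS and pod connectivity on that default. Sketch with the logged flags:

    out/minikube-linux-arm64 start -p auto-987843 --memory=3072 --wait=true \
      --wait-timeout=15m --driver=docker --container-runtime=containerd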

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-735028 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
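VerifyKubernetesImages diffs the loaded images against the set minikube itself ships; the kindest/kindnetd entries are flagged only because the kindnet CNI pulled them in. The same audit can be repeated by hand with the command the test runs:

  out/minikube-linux-arm64 -p newest-cni-735028 image list --format=json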

TestStartStop/group/newest-cni/serial/Pause (3.71s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-735028 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-735028 -n newest-cni-735028
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-735028 -n newest-cni-735028: exit status 2 (393.604509ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-735028 -n newest-cni-735028
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-735028 -n newest-cni-735028: exit status 2 (471.678055ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-735028 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-735028 -n newest-cni-735028
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-735028 -n newest-cni-735028
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.71s)
E0909 12:50:43.371933  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:50:45.952037  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/default-k8s-diff-port-160313/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:11.780413  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/auto-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:11.786827  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/auto-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:11.798242  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/auto-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:11.819742  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/auto-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:11.861218  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/auto-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:11.942631  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/auto-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:12.104372  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/auto-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:12.426171  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/auto-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:13.067632  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/auto-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:14.349549  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/auto-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:16.361510  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/kindnet-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:16.367885  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/kindnet-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:16.379249  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/kindnet-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:16.400676  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/kindnet-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:16.442095  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/kindnet-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:16.524138  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/kindnet-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:16.685777  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/kindnet-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:16.911589  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/auto-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:17.009492  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/kindnet-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:17.651365  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/kindnet-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:18.933041  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/kindnet-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:21.495979  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/kindnet-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:22.032996  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/auto-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:26.617790  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/kindnet-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:32.275282  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/auto-987843/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:51:36.859135  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/kindnet-987843/client.crt: no such file or directory" logger="UnhandledError"
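The E0909 cert_rotation lines interleaved above come from client-go's certificate reloader: it keeps watching client.crt paths for profiles such as addons-630724, auto-987843 and kindnet-987843 whose certificate files were removed when those profiles were torn down, so they read as leftover noise rather than test failures. To cross-check which profiles and contexts are actually live at a given point, something like:

  out/minikube-linux-arm64 profile list
  kubectl config get-contexts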

TestNetworkPlugins/group/kindnet/Start (61.54s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-987843 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0909 12:45:26.453423  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:45:43.371945  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/addons-630724/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:45:52.597489  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-987843 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m1.53712424s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.54s)
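Each CNI profile start is followed by a ControllerPod check (below) that waits for the plugin's DaemonSet pod in kube-system. A hand-run equivalent against the kindnet-987843 profile from above; kubectl wait is an assumption here, the harness uses its own poller:

  kubectl --context kindnet-987843 -n kube-system get pods -l app=kindnet
  kubectl --context kindnet-987843 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m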

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-987843 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-987843 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x84zd" [90ac8e60-90f3-48a3-b119-4a4041e9e3c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-x84zd" [90ac8e60-90f3-48a3-b119-4a4041e9e3c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003386927s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)
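The NetCatPod step force-replaces the test deployment and then polls for a Ready pod carrying the app=netcat label. A manual equivalent (a sketch; the wait command is a stand-in for the harness's own poll loop):

  kubectl --context auto-987843 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context auto-987843 wait --for=condition=Ready pod -l app=netcat --timeout=15m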

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-x4kdb" [3c832cff-777e-47bf-8d66-3783c572d208] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004091315s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-987843 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
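DNS, Localhost and HairPin form the connectivity triple run against every plugin: service DNS resolution from inside the pod, a dial to loopback, and a dial back to the pod through its own netcat service, which only succeeds when the CNI handles hairpin traffic. The three probes, as run above:

  kubectl --context auto-987843 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"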

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-987843 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-987843 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fgqjx" [db1dc721-ec5d-499a-86a9-507fc54a3e36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fgqjx" [db1dc721-ec5d-499a-86a9-507fc54a3e36] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.016143504s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.32s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-987843 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

TestNetworkPlugins/group/calico/Start (76.79s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-987843 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-987843 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m16.786210027s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.79s)
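As with kindnet, calico's readiness gate is its node DaemonSet, checked later under ControllerPod with the k8s-app=calico-node label; a manual spot-check against this profile would be:

  kubectl --context calico-987843 -n kube-system get pods -l k8s-app=calico-node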

TestNetworkPlugins/group/custom-flannel/Start (59.05s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-987843 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0909 12:47:24.552146  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/old-k8s-version-532490/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-987843 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (59.047722704s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.05s)
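Note that --cni here is given a path to a CNI manifest (testdata/kube-flannel.yaml) instead of a built-in plugin name; minikube accepts either form and applies the manifest to the cluster it brings up. The same shape works for any custom manifest, e.g. with a hypothetical path:

  out/minikube-linux-arm64 start -p custom-flannel-987843 --memory=3072 --wait=true --cni=/path/to/custom-cni.yaml --driver=docker --container-runtime=containerd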

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-987843 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-987843 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dgc8f" [93841fc9-f07e-4085-a6c8-a7424694c496] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dgc8f" [93841fc9-f07e-4085-a6c8-a7424694c496] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004160263s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-q85zg" [2cbbc681-bc86-45cb-92c7-bcdca7fc121c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005196601s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-987843 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-987843 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2qh2f" [8bfc8a2e-0c62-442f-bdbe-be4e4d7c0136] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0909 12:48:08.735658  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-2qh2f" [8bfc8a2e-0c62-442f-bdbe-be4e4d7c0136] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005100278s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.31s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-987843 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/calico/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-987843 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/Start (75.99s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-987843 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0909 12:48:36.439721  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/no-preload-434774/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-987843 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m15.990331075s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.99s)
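--enable-default-cni=true is the legacy spelling for minikube's built-in bridge CNI; current minikube help documents it as deprecated in favor of --cni=bridge, so the run above could presumably also be written as:

  out/minikube-linux-arm64 start -p enable-default-cni-987843 --memory=3072 --wait=true --cni=bridge --driver=docker --container-runtime=containerd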

TestNetworkPlugins/group/flannel/Start (58.47s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-987843 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0909 12:49:23.997056  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/default-k8s-diff-port-160313/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:49:24.004584  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/default-k8s-diff-port-160313/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:49:24.033243  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/default-k8s-diff-port-160313/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:49:24.054640  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/default-k8s-diff-port-160313/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:49:24.096089  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/default-k8s-diff-port-160313/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:49:24.177575  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/default-k8s-diff-port-160313/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:49:24.339188  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/default-k8s-diff-port-160313/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:49:24.661017  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/default-k8s-diff-port-160313/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:49:25.302363  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/default-k8s-diff-port-160313/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:49:26.583712  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/default-k8s-diff-port-160313/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:49:29.145543  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/default-k8s-diff-port-160313/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:49:34.266914  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/default-k8s-diff-port-160313/client.crt: no such file or directory" logger="UnhandledError"
E0909 12:49:37.531240  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-987843 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (58.468551086s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.47s)
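Unlike the other plugins, flannel's DaemonSet lands in its own kube-flannel namespace, which is where the ControllerPod check below looks for it; a manual equivalent:

  kubectl --context flannel-987843 -n kube-flannel get pods -l app=flannel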

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qpv5g" [7e39eb4e-4c33-4288-b757-f13d5a4f3ecd] Running
E0909 12:49:44.508273  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/default-k8s-diff-port-160313/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004465148s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-987843 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (9.46s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-987843 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wtz2k" [1e7aa1fa-3dff-486f-99fa-cbae35ba7edf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wtz2k" [1e7aa1fa-3dff-486f-99fa-cbae35ba7edf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.042347321s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.46s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-987843 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-987843 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nckr9" [5447e48a-a472-4f2a-b3cd-95cca990d228] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0909 12:49:54.462286  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/functional-649830/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-nckr9" [5447e48a-a472-4f2a-b3cd-95cca990d228] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004652453s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.30s)

TestNetworkPlugins/group/flannel/DNS (0.52s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-987843 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.52s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-987843 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (76.12s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-987843 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-987843 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m16.116802078s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-987843 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-987843 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-r5sz5" [a9491ff9-a323-4901-910a-3b7525b2e5bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-r5sz5" [a9491ff9-a323-4901-910a-3b7525b2e5bb] Running
E0909 12:51:52.757650  298741 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/auto-987843/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003857944s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-987843 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-987843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.6s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-455370 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-455370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-455370
--- SKIP: TestDownloadOnlyKic (0.60s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-693699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-693699
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

TestNetworkPlugins/group/kubenet (4.66s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-987843 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-987843

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-987843

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-987843

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-987843

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-987843

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-987843

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-987843

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-987843

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-987843

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-987843

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: /etc/hosts:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: /etc/resolv.conf:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-987843

>>> host: crictl pods:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: crictl containers:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> k8s: describe netcat deployment:
error: context "kubenet-987843" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-987843" does not exist

>>> k8s: netcat logs:
error: context "kubenet-987843" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-987843" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-987843" does not exist

>>> k8s: coredns logs:
error: context "kubenet-987843" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-987843" does not exist

>>> k8s: api server logs:
error: context "kubenet-987843" does not exist

>>> host: /etc/cni:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: ip a s:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: ip r s:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: iptables-save:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: iptables table nat:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-987843" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-987843" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-987843" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: kubelet daemon config:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> k8s: kubelet logs:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19584-293351/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 09 Sep 2024 12:27:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-962785
contexts:
- context:
    cluster: pause-962785
    extensions:
    - extension:
        last-update: Mon, 09 Sep 2024 12:27:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-962785
  name: pause-962785
current-context: pause-962785
kind: Config
preferences: {}
users:
- name: pause-962785
  user:
    client-certificate: /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/pause-962785/client.crt
    client-key: /home/jenkins/minikube-integration/19584-293351/.minikube/profiles/pause-962785/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-987843

>>> host: docker daemon status:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: docker daemon config:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: docker system info:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: cri-docker daemon status:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: cri-docker daemon config:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: cri-dockerd version:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: containerd daemon status:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: containerd daemon config:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: containerd config dump:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: crio daemon status:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: crio daemon config:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: /etc/crio:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"

>>> host: crio config:
* Profile "kubenet-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987843"
----------------------- debugLogs end: kubenet-987843 [took: 4.432036317s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-987843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-987843
--- SKIP: TestNetworkPlugins/group/kubenet (4.66s)
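The kubenet skip is a container-runtime gate: kubenet supplies no CNI, so a runtime that requires one (containerd in this job) cannot run it. A minimal sketch, assuming the runtime name is available to the test as a string; the containerRuntime variable is illustrative, not minikube's actual plumbing:

    package net_test

    import (
        "strings"
        "testing"
    )

    // containerRuntime stands in for however the harness reports the
    // runtime under test; this job ran containerd.
    var containerRuntime = "containerd"

    func TestKubenetGateSketch(t *testing.T) {
        if !strings.EqualFold(containerRuntime, "docker") {
            t.Skipf("Skipping the test as the %s container runtime requires CNI", containerRuntime)
        }
    }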

TestNetworkPlugins/group/cilium (5.77s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-987843 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-987843

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-987843

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-987843

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-987843

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-987843

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-987843

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-987843

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-987843

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-987843

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-987843

>>> host: /etc/nsswitch.conf:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: /etc/hosts:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: /etc/resolv.conf:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-987843

>>> host: crictl pods:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: crictl containers:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> k8s: describe netcat deployment:
error: context "cilium-987843" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-987843" does not exist

>>> k8s: netcat logs:
error: context "cilium-987843" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-987843" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-987843" does not exist

>>> k8s: coredns logs:
error: context "cilium-987843" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-987843" does not exist

>>> k8s: api server logs:
error: context "cilium-987843" does not exist

>>> host: /etc/cni:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: ip a s:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: ip r s:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: iptables-save:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: iptables table nat:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-987843

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-987843

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-987843" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-987843" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-987843

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-987843

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-987843" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-987843" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-987843" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-987843" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-987843" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: kubelet daemon config:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> k8s: kubelet logs:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-987843

>>> host: docker daemon status:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: docker daemon config:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: docker system info:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: cri-docker daemon status:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: cri-docker daemon config:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: cri-dockerd version:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: containerd daemon status:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: containerd daemon config:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: containerd config dump:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: crio daemon status:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: crio daemon config:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: /etc/crio:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"

>>> host: crio config:
* Profile "cilium-987843" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987843"
----------------------- debugLogs end: cilium-987843 [took: 5.542657807s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-987843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-987843
--- SKIP: TestNetworkPlugins/group/cilium (5.77s)