Test Report: Docker_Linux_containerd_arm64 19429

b06913c07d6338950e5c7fdbd8346c60c9653ed1:2024-08-14:35775

Failed tests (1 of 328)

Order  Failed test                 Duration (s)
29     TestAddons/serial/Volcano   199.78
TestAddons/serial/Volcano (199.78s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 46.253313ms
addons_test.go:913: volcano-controller stabilized in 46.613837ms
addons_test.go:897: volcano-scheduler stabilized in 46.890882ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-7m9zn" [0ab60246-dd41-472e-b98d-1533967bc62e] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00426761s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-2s99n" [f0877282-5482-4c62-8e90-909941b9aac3] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003082147s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-8q2tt" [f9caadce-7c5b-49e1-a6a4-396ec98414dd] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003458695s
addons_test.go:932: (dbg) Run:  kubectl --context addons-785001 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-785001 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-785001 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [3dee5023-9fa9-44cb-bc56-54770e265840] Pending
helpers_test.go:344: "test-job-nginx-0" [3dee5023-9fa9-44cb-bc56-54770e265840] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-785001 -n addons-785001
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-14 00:34:09.135186114 +0000 UTC m=+431.574488659
addons_test.go:964: (dbg) Run:  kubectl --context addons-785001 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-785001 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-68c3eacf-8cf1-46ad-9e33-1d65c2da4b12
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xcc9t (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-xcc9t:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age   From     Message
----     ------            ----  ----     -------
Warning  FailedScheduling  3m    volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-785001 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-785001 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
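The FailedScheduling event in the describe output above points at the proximate cause: the job's single task asks for a full CPU (requests and limits both cpu: 1) on a node capped at 2 CPUs, where the preinstalled addons already hold most of the allocatable CPU. A quick way to check the remaining headroom is the diagnostic sketch below (it assumes the node name equals the profile name, which is minikube's default):

# Compare the node's allocatable CPU with the requests already scheduled on it.
kubectl --context addons-785001 describe node addons-785001 | grep -A 8 'Allocated resources'

The log does not reproduce testdata/vcjob.yaml itself, but a minimal Volcano job matching the pod description above (queue test, one nginx task running sleep 10m with one full CPU) would look roughly like this sketch; apiVersion, schedulerName, and restartPolicy are assumptions based on Volcano's stock batch API, not values taken from the test data, and the test queue is assumed to exist already (the pod's volcano.sh/queue-name label suggests the test data creates it):

# Hedged reconstruction from the 'kubectl describe po' output above.
kubectl --context addons-785001 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano
  queue: test
  tasks:
    - replicas: 1
      name: nginx
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx:latest
              command: ["sleep", "10m"]
              resources:
                requests:
                  cpu: "1"
                limits:
                  cpu: "1"
EOF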
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-785001
helpers_test.go:235: (dbg) docker inspect addons-785001:

-- stdout --
	[
	    {
	        "Id": "df8ed7269cb6b4319e0c2d8991dd31742f1b426f68797b27d61af06a81d8c086",
	        "Created": "2024-08-14T00:27:41.063198463Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 594264,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-14T00:27:41.186740245Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dabaf166e0a0b412e211847bb414df2dbd9c8a852737ec3bb7f19e06fbc82919",
	        "ResolvConfPath": "/var/lib/docker/containers/df8ed7269cb6b4319e0c2d8991dd31742f1b426f68797b27d61af06a81d8c086/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df8ed7269cb6b4319e0c2d8991dd31742f1b426f68797b27d61af06a81d8c086/hostname",
	        "HostsPath": "/var/lib/docker/containers/df8ed7269cb6b4319e0c2d8991dd31742f1b426f68797b27d61af06a81d8c086/hosts",
	        "LogPath": "/var/lib/docker/containers/df8ed7269cb6b4319e0c2d8991dd31742f1b426f68797b27d61af06a81d8c086/df8ed7269cb6b4319e0c2d8991dd31742f1b426f68797b27d61af06a81d8c086-json.log",
	        "Name": "/addons-785001",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-785001:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-785001",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/13904c404ebd6192b271ff01598e825a0c0af880628d80b2b9279e55fc102f88-init/diff:/var/lib/docker/overlay2/90c8d510e0fe3b90f0bfc03af7c31b7493303b2d243fa4a851cce17d62028478/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13904c404ebd6192b271ff01598e825a0c0af880628d80b2b9279e55fc102f88/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13904c404ebd6192b271ff01598e825a0c0af880628d80b2b9279e55fc102f88/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13904c404ebd6192b271ff01598e825a0c0af880628d80b2b9279e55fc102f88/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-785001",
	                "Source": "/var/lib/docker/volumes/addons-785001/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-785001",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-785001",
	                "name.minikube.sigs.k8s.io": "addons-785001",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "585d9860491bc0ff3349d1fa21150675d3845a6e22fe2e28bea5f681d9251f9f",
	            "SandboxKey": "/var/run/docker/netns/585d9860491b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-785001": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "77c6f38e154601a3ef7e6cb2559483308db848ebffd42bf6071e469fb08731c2",
	                    "EndpointID": "87e44b3ae22c55ea5c5022446a42ce1d2803e1e97381a4cfca78190fae5c7d76",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-785001",
	                        "df8ed7269cb6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
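For orientation, the HostConfig block above records the node container's resource caps: "NanoCpus": 2000000000, i.e. 2 CPUs, and "Memory": 4194304000 bytes, i.e. 4000 MiB, matching the --memory=4000 flag in the Audit table further down. A sketch for pulling just those two fields from a live container, assuming jq is installed on the host:

# NanoCpus is CPUs x 10^9, so 2000000000 => 2 CPUs; Memory is in bytes.
docker inspect addons-785001 | jq '.[0].HostConfig | {NanoCpus, Memory}'

With only 2 CPUs available to the whole cluster, the addon pods plus the 1-CPU Volcano job above do not fit, which is consistent with the Insufficient cpu scheduling event.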
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-785001 -n addons-785001
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-785001 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-785001 logs -n 25: (1.612948816s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-971583   | jenkins | v1.33.1 | 14 Aug 24 00:26 UTC |                     |
	|         | -p download-only-971583              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC | 14 Aug 24 00:27 UTC |
	| delete  | -p download-only-971583              | download-only-971583   | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC | 14 Aug 24 00:27 UTC |
	| start   | -o=json --download-only              | download-only-820317   | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC |                     |
	|         | -p download-only-820317              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC | 14 Aug 24 00:27 UTC |
	| delete  | -p download-only-820317              | download-only-820317   | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC | 14 Aug 24 00:27 UTC |
	| delete  | -p download-only-971583              | download-only-971583   | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC | 14 Aug 24 00:27 UTC |
	| delete  | -p download-only-820317              | download-only-820317   | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC | 14 Aug 24 00:27 UTC |
	| start   | --download-only -p                   | download-docker-506752 | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC |                     |
	|         | download-docker-506752               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-506752            | download-docker-506752 | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC | 14 Aug 24 00:27 UTC |
	| start   | --download-only -p                   | binary-mirror-346973   | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC |                     |
	|         | binary-mirror-346973                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37587               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-346973              | binary-mirror-346973   | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC | 14 Aug 24 00:27 UTC |
	| addons  | disable dashboard -p                 | addons-785001          | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC |                     |
	|         | addons-785001                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-785001          | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC |                     |
	|         | addons-785001                        |                        |         |         |                     |                     |
	| start   | -p addons-785001 --wait=true         | addons-785001          | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC | 14 Aug 24 00:30 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 00:27:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 00:27:16.637152  593773 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:27:16.637313  593773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:27:16.637326  593773 out.go:304] Setting ErrFile to fd 2...
	I0814 00:27:16.637331  593773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:27:16.637555  593773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
	I0814 00:27:16.638006  593773 out.go:298] Setting JSON to false
	I0814 00:27:16.638897  593773 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14981,"bootTime":1723580256,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0814 00:27:16.638969  593773 start.go:139] virtualization:  
	I0814 00:27:16.642092  593773 out.go:177] * [addons-785001] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0814 00:27:16.643710  593773 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 00:27:16.643845  593773 notify.go:220] Checking for updates...
	I0814 00:27:16.647000  593773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:27:16.648535  593773 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-587614/kubeconfig
	I0814 00:27:16.650269  593773 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-587614/.minikube
	I0814 00:27:16.651774  593773 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0814 00:27:16.653565  593773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 00:27:16.655415  593773 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:27:16.676337  593773 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0814 00:27:16.676461  593773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 00:27:16.739681  593773 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-14 00:27:16.730079089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0814 00:27:16.739793  593773 docker.go:307] overlay module found
	I0814 00:27:16.741742  593773 out.go:177] * Using the docker driver based on user configuration
	I0814 00:27:16.743371  593773 start.go:297] selected driver: docker
	I0814 00:27:16.743389  593773 start.go:901] validating driver "docker" against <nil>
	I0814 00:27:16.743402  593773 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 00:27:16.744030  593773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 00:27:16.807791  593773 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-14 00:27:16.799013143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0814 00:27:16.807979  593773 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 00:27:16.808215  593773 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 00:27:16.810021  593773 out.go:177] * Using Docker driver with root privileges
	I0814 00:27:16.811473  593773 cni.go:84] Creating CNI manager for ""
	I0814 00:27:16.811494  593773 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0814 00:27:16.811504  593773 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 00:27:16.811620  593773 start.go:340] cluster config:
	{Name:addons-785001 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-785001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:27:16.813393  593773 out.go:177] * Starting "addons-785001" primary control-plane node in "addons-785001" cluster
	I0814 00:27:16.814799  593773 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0814 00:27:16.816424  593773 out.go:177] * Pulling base image v0.0.44-1723567951-19429 ...
	I0814 00:27:16.817935  593773 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0814 00:27:16.817984  593773 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-587614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0814 00:27:16.817990  593773 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local docker daemon
	I0814 00:27:16.818009  593773 cache.go:56] Caching tarball of preloaded images
	I0814 00:27:16.818085  593773 preload.go:172] Found /home/jenkins/minikube-integration/19429-587614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 00:27:16.818095  593773 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0814 00:27:16.818421  593773 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/config.json ...
	I0814 00:27:16.818442  593773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/config.json: {Name:mkf8b5fd6bc33e29011b8cb209ae2edd8a2d36ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:27:16.832924  593773 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 to local cache
	I0814 00:27:16.833031  593773 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local cache directory
	I0814 00:27:16.833055  593773 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local cache directory, skipping pull
	I0814 00:27:16.833071  593773 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 exists in cache, skipping pull
	I0814 00:27:16.833079  593773 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 as a tarball
	I0814 00:27:16.833085  593773 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 from local cache
	I0814 00:27:33.661960  593773 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 from cached tarball
	I0814 00:27:33.662004  593773 cache.go:194] Successfully downloaded all kic artifacts
	I0814 00:27:33.662046  593773 start.go:360] acquireMachinesLock for addons-785001: {Name:mk1175535a4446715305b0bbc36b016cfb7b1074 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:27:33.662509  593773 start.go:364] duration metric: took 435.986µs to acquireMachinesLock for "addons-785001"
	I0814 00:27:33.662546  593773 start.go:93] Provisioning new machine with config: &{Name:addons-785001 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-785001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0814 00:27:33.662643  593773 start.go:125] createHost starting for "" (driver="docker")
	I0814 00:27:33.665305  593773 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0814 00:27:33.665551  593773 start.go:159] libmachine.API.Create for "addons-785001" (driver="docker")
	I0814 00:27:33.665616  593773 client.go:168] LocalClient.Create starting
	I0814 00:27:33.665744  593773 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19429-587614/.minikube/certs/ca.pem
	I0814 00:27:33.956638  593773 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19429-587614/.minikube/certs/cert.pem
	I0814 00:27:34.717803  593773 cli_runner.go:164] Run: docker network inspect addons-785001 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0814 00:27:34.736825  593773 cli_runner.go:211] docker network inspect addons-785001 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0814 00:27:34.736921  593773 network_create.go:284] running [docker network inspect addons-785001] to gather additional debugging logs...
	I0814 00:27:34.736946  593773 cli_runner.go:164] Run: docker network inspect addons-785001
	W0814 00:27:34.752614  593773 cli_runner.go:211] docker network inspect addons-785001 returned with exit code 1
	I0814 00:27:34.752668  593773 network_create.go:287] error running [docker network inspect addons-785001]: docker network inspect addons-785001: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-785001 not found
	I0814 00:27:34.752689  593773 network_create.go:289] output of [docker network inspect addons-785001]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-785001 not found
	
	** /stderr **
	I0814 00:27:34.752791  593773 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 00:27:34.767949  593773 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400175c9c0}
	I0814 00:27:34.767991  593773 network_create.go:124] attempt to create docker network addons-785001 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0814 00:27:34.768053  593773 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-785001 addons-785001
	I0814 00:27:34.837732  593773 network_create.go:108] docker network addons-785001 192.168.49.0/24 created
	I0814 00:27:34.837765  593773 kic.go:121] calculated static IP "192.168.49.2" for the "addons-785001" container
	I0814 00:27:34.837835  593773 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0814 00:27:34.852996  593773 cli_runner.go:164] Run: docker volume create addons-785001 --label name.minikube.sigs.k8s.io=addons-785001 --label created_by.minikube.sigs.k8s.io=true
	I0814 00:27:34.870475  593773 oci.go:103] Successfully created a docker volume addons-785001
	I0814 00:27:34.870567  593773 cli_runner.go:164] Run: docker run --rm --name addons-785001-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-785001 --entrypoint /usr/bin/test -v addons-785001:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 -d /var/lib
	I0814 00:27:36.828822  593773 cli_runner.go:217] Completed: docker run --rm --name addons-785001-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-785001 --entrypoint /usr/bin/test -v addons-785001:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 -d /var/lib: (1.95821325s)
	I0814 00:27:36.828855  593773 oci.go:107] Successfully prepared a docker volume addons-785001
	I0814 00:27:36.828881  593773 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0814 00:27:36.828902  593773 kic.go:194] Starting extracting preloaded images to volume ...
	I0814 00:27:36.828997  593773 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19429-587614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-785001:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 -I lz4 -xf /preloaded.tar -C /extractDir
	I0814 00:27:40.988103  593773 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19429-587614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-785001:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 -I lz4 -xf /preloaded.tar -C /extractDir: (4.159063881s)
	I0814 00:27:40.988161  593773 kic.go:203] duration metric: took 4.159257571s to extract preloaded images to volume ...
	W0814 00:27:40.988301  593773 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0814 00:27:40.988428  593773 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0814 00:27:41.049631  593773 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-785001 --name addons-785001 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-785001 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-785001 --network addons-785001 --ip 192.168.49.2 --volume addons-785001:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083
	I0814 00:27:41.339439  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Running}}
	I0814 00:27:41.366466  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:27:41.387431  593773 cli_runner.go:164] Run: docker exec addons-785001 stat /var/lib/dpkg/alternatives/iptables
	I0814 00:27:41.460710  593773 oci.go:144] the created container "addons-785001" has a running status.
	I0814 00:27:41.460737  593773 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa...
	I0814 00:27:41.734656  593773 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0814 00:27:41.766301  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:27:41.787793  593773 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0814 00:27:41.787817  593773 kic_runner.go:114] Args: [docker exec --privileged addons-785001 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0814 00:27:41.869476  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:27:41.892910  593773 machine.go:94] provisionDockerMachine start ...
	I0814 00:27:41.893016  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:27:41.916427  593773 main.go:141] libmachine: Using SSH client type: native
	I0814 00:27:41.916738  593773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I0814 00:27:41.916754  593773 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 00:27:41.917389  593773 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46494->127.0.0.1:33508: read: connection reset by peer
	I0814 00:27:45.052376  593773 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-785001
	
	I0814 00:27:45.052460  593773 ubuntu.go:169] provisioning hostname "addons-785001"
	I0814 00:27:45.052578  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:27:45.084267  593773 main.go:141] libmachine: Using SSH client type: native
	I0814 00:27:45.084533  593773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I0814 00:27:45.084549  593773 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-785001 && echo "addons-785001" | sudo tee /etc/hostname
	I0814 00:27:45.249208  593773 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-785001
	
	I0814 00:27:45.249305  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:27:45.279629  593773 main.go:141] libmachine: Using SSH client type: native
	I0814 00:27:45.279911  593773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I0814 00:27:45.279947  593773 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-785001' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-785001/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-785001' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 00:27:45.414987  593773 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 00:27:45.415021  593773 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19429-587614/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-587614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-587614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-587614/.minikube}
	I0814 00:27:45.415044  593773 ubuntu.go:177] setting up certificates
	I0814 00:27:45.415055  593773 provision.go:84] configureAuth start
	I0814 00:27:45.415121  593773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-785001
	I0814 00:27:45.437964  593773 provision.go:143] copyHostCerts
	I0814 00:27:45.438059  593773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-587614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-587614/.minikube/ca.pem (1078 bytes)
	I0814 00:27:45.438200  593773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-587614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-587614/.minikube/cert.pem (1123 bytes)
	I0814 00:27:45.438269  593773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-587614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-587614/.minikube/key.pem (1675 bytes)
	I0814 00:27:45.438350  593773 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-587614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-587614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-587614/.minikube/certs/ca-key.pem org=jenkins.addons-785001 san=[127.0.0.1 192.168.49.2 addons-785001 localhost minikube]
	I0814 00:27:46.026203  593773 provision.go:177] copyRemoteCerts
	I0814 00:27:46.026272  593773 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 00:27:46.026318  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:27:46.044005  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:27:46.135596  593773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-587614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 00:27:46.160160  593773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-587614/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0814 00:27:46.185642  593773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-587614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 00:27:46.210167  593773 provision.go:87] duration metric: took 795.093098ms to configureAuth
	I0814 00:27:46.210195  593773 ubuntu.go:193] setting minikube options for container-runtime
	I0814 00:27:46.210383  593773 config.go:182] Loaded profile config "addons-785001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0814 00:27:46.210397  593773 machine.go:97] duration metric: took 4.317464212s to provisionDockerMachine
	I0814 00:27:46.210404  593773 client.go:171] duration metric: took 12.54477887s to LocalClient.Create
	I0814 00:27:46.210424  593773 start.go:167] duration metric: took 12.544875485s to libmachine.API.Create "addons-785001"
	I0814 00:27:46.210434  593773 start.go:293] postStartSetup for "addons-785001" (driver="docker")
	I0814 00:27:46.210444  593773 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 00:27:46.210512  593773 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 00:27:46.210558  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:27:46.226915  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:27:46.320283  593773 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 00:27:46.323459  593773 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 00:27:46.323505  593773 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 00:27:46.323516  593773 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 00:27:46.323523  593773 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0814 00:27:46.323534  593773 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-587614/.minikube/addons for local assets ...
	I0814 00:27:46.323607  593773 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-587614/.minikube/files for local assets ...
	I0814 00:27:46.323634  593773 start.go:296] duration metric: took 113.193828ms for postStartSetup
	I0814 00:27:46.323954  593773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-785001
	I0814 00:27:46.340788  593773 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/config.json ...
	I0814 00:27:46.341089  593773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 00:27:46.341153  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:27:46.358120  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:27:46.447400  593773 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0814 00:27:46.452012  593773 start.go:128] duration metric: took 12.789352383s to createHost
	I0814 00:27:46.452036  593773 start.go:83] releasing machines lock for "addons-785001", held for 12.789511029s
	I0814 00:27:46.452110  593773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-785001
	I0814 00:27:46.469155  593773 ssh_runner.go:195] Run: cat /version.json
	I0814 00:27:46.469183  593773 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 00:27:46.469213  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:27:46.469264  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:27:46.488893  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:27:46.500267  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:27:46.713004  593773 ssh_runner.go:195] Run: systemctl --version
	I0814 00:27:46.717229  593773 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0814 00:27:46.721345  593773 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0814 00:27:46.746538  593773 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0814 00:27:46.746623  593773 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 00:27:46.775402  593773 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
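For context, the loopback patch above injects a "name" field and pins cniVersion, so a patched /etc/cni/net.d/*loopback.conf* ends up looking roughly like this (an illustrative reconstruction, not a capture from this run):
	{
	  "cniVersion": "1.0.0",
	  "name": "loopback",
	  "type": "loopback"
	}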
	I0814 00:27:46.775424  593773 start.go:495] detecting cgroup driver to use...
	I0814 00:27:46.775456  593773 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0814 00:27:46.775514  593773 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0814 00:27:46.787741  593773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0814 00:27:46.799569  593773 docker.go:217] disabling cri-docker service (if available) ...
	I0814 00:27:46.799674  593773 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 00:27:46.813766  593773 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 00:27:46.828142  593773 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 00:27:46.908821  593773 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 00:27:47.000942  593773 docker.go:233] disabling docker service ...
	I0814 00:27:47.001084  593773 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 00:27:47.024294  593773 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 00:27:47.036304  593773 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 00:27:47.129698  593773 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 00:27:47.218559  593773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 00:27:47.229810  593773 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 00:27:47.247017  593773 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0814 00:27:47.257506  593773 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0814 00:27:47.267709  593773 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0814 00:27:47.267820  593773 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0814 00:27:47.277939  593773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0814 00:27:47.288574  593773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0814 00:27:47.298525  593773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0814 00:27:47.308432  593773 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 00:27:47.317460  593773 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0814 00:27:47.327353  593773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0814 00:27:47.337160  593773 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
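Taken together, the sed edits above amount to the following shape for the CRI section of /etc/containerd/config.toml (an illustrative excerpt reconstructed from the commands, not a dump of the actual file):
	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false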
	I0814 00:27:47.347227  593773 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 00:27:47.356116  593773 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 00:27:47.364726  593773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:27:47.449667  593773 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0814 00:27:47.599364  593773 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 00:27:47.599513  593773 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0814 00:27:47.603105  593773 start.go:563] Will wait 60s for crictl version
	I0814 00:27:47.603169  593773 ssh_runner.go:195] Run: which crictl
	I0814 00:27:47.606425  593773 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 00:27:47.643695  593773 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0814 00:27:47.643779  593773 ssh_runner.go:195] Run: containerd --version
	I0814 00:27:47.670018  593773 ssh_runner.go:195] Run: containerd --version
	I0814 00:27:47.698165  593773 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0814 00:27:47.700148  593773 cli_runner.go:164] Run: docker network inspect addons-785001 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 00:27:47.715556  593773 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0814 00:27:47.719293  593773 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
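The { grep -v ...; echo ...; } > /tmp/h.$$ followed by sudo cp idiom is used because a plain sudo echo ... > /etc/hosts would perform the redirection as the unprivileged user; building the file in /tmp and copying it with sudo sidesteps that. The result can be spot-checked with:
	grep host.minikube.internal /etc/hosts
	# 192.168.49.1	host.minikube.internal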
	I0814 00:27:47.730070  593773 kubeadm.go:883] updating cluster {Name:addons-785001 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-785001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 00:27:47.730200  593773 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0814 00:27:47.730265  593773 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:27:47.765819  593773 containerd.go:627] all images are preloaded for containerd runtime.
	I0814 00:27:47.765841  593773 containerd.go:534] Images already preloaded, skipping extraction
	I0814 00:27:47.765900  593773 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:27:47.815020  593773 containerd.go:627] all images are preloaded for containerd runtime.
	I0814 00:27:47.815045  593773 cache_images.go:84] Images are preloaded, skipping loading
	I0814 00:27:47.815054  593773 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0814 00:27:47.815174  593773 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-785001 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-785001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 00:27:47.815247  593773 ssh_runner.go:195] Run: sudo crictl info
	I0814 00:27:47.859035  593773 cni.go:84] Creating CNI manager for ""
	I0814 00:27:47.859064  593773 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0814 00:27:47.859075  593773 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 00:27:47.859099  593773 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-785001 NodeName:addons-785001 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 00:27:47.859242  593773 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-785001"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 00:27:47.859317  593773 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 00:27:47.868648  593773 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 00:27:47.868751  593773 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 00:27:47.877840  593773 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0814 00:27:47.896409  593773 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 00:27:47.915116  593773 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
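With the rendered config staged at /var/tmp/minikube/kubeadm.yaml.new, it could also be validated by hand without mutating the node, e.g. (a hedged sketch, run inside the minikube container):
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run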
	I0814 00:27:47.933541  593773 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0814 00:27:47.936968  593773 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 00:27:47.947907  593773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:27:48.030013  593773 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 00:27:48.046428  593773 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001 for IP: 192.168.49.2
	I0814 00:27:48.046454  593773 certs.go:194] generating shared ca certs ...
	I0814 00:27:48.046499  593773 certs.go:226] acquiring lock for ca certs: {Name:mkdd3524330900d73112bf3446e8a8f051ebe9d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:27:48.046699  593773 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-587614/.minikube/ca.key
	I0814 00:27:48.287733  593773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-587614/.minikube/ca.crt ...
	I0814 00:27:48.287770  593773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-587614/.minikube/ca.crt: {Name:mk1bc793fe86651b2e6fa76380e88508e525e3a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:27:48.288508  593773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-587614/.minikube/ca.key ...
	I0814 00:27:48.288526  593773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-587614/.minikube/ca.key: {Name:mkb431f6d16de55ff1ebc5936cabff3959c0f6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:27:48.288970  593773 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-587614/.minikube/proxy-client-ca.key
	I0814 00:27:48.798649  593773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-587614/.minikube/proxy-client-ca.crt ...
	I0814 00:27:48.798686  593773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-587614/.minikube/proxy-client-ca.crt: {Name:mk39875055d80ec8be24b193742a5699d02ee039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:27:48.798880  593773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-587614/.minikube/proxy-client-ca.key ...
	I0814 00:27:48.798893  593773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-587614/.minikube/proxy-client-ca.key: {Name:mk884088b720e77f9a8b7e9419a8ce59a5862862 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:27:48.798975  593773 certs.go:256] generating profile certs ...
	I0814 00:27:48.799035  593773 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.key
	I0814 00:27:48.799054  593773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt with IP's: []
	I0814 00:27:49.057728  593773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt ...
	I0814 00:27:49.057769  593773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: {Name:mk5fe6b76d253f70972dfa122712b6496d99a22b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:27:49.057956  593773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.key ...
	I0814 00:27:49.057969  593773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.key: {Name:mk3a86f426fa43d66d0e6ab345e50ede5975cecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:27:49.058081  593773 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/apiserver.key.027856da
	I0814 00:27:49.058101  593773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/apiserver.crt.027856da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0814 00:27:49.182665  593773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/apiserver.crt.027856da ...
	I0814 00:27:49.182701  593773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/apiserver.crt.027856da: {Name:mk2045b135b5eb5ec7e3771c1eab65b726f7151a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:27:49.183276  593773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/apiserver.key.027856da ...
	I0814 00:27:49.183295  593773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/apiserver.key.027856da: {Name:mk16e228b888de8ab6572e43f62dd40265790e71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:27:49.183381  593773 certs.go:381] copying /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/apiserver.crt.027856da -> /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/apiserver.crt
	I0814 00:27:49.183463  593773 certs.go:385] copying /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/apiserver.key.027856da -> /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/apiserver.key
	I0814 00:27:49.183544  593773 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/proxy-client.key
	I0814 00:27:49.183565  593773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/proxy-client.crt with IP's: []
	I0814 00:27:49.427276  593773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/proxy-client.crt ...
	I0814 00:27:49.427309  593773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/proxy-client.crt: {Name:mk8104886af37f03b7f6c8e2a87c67fe69a04dee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:27:49.427972  593773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/proxy-client.key ...
	I0814 00:27:49.427990  593773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/proxy-client.key: {Name:mke76a6d153920d8857044080f4b2b38216f78fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:27:49.428791  593773 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-587614/.minikube/certs/ca-key.pem (1675 bytes)
	I0814 00:27:49.428835  593773 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-587614/.minikube/certs/ca.pem (1078 bytes)
	I0814 00:27:49.428861  593773 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-587614/.minikube/certs/cert.pem (1123 bytes)
	I0814 00:27:49.428922  593773 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-587614/.minikube/certs/key.pem (1675 bytes)
	I0814 00:27:49.429511  593773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-587614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 00:27:49.454482  593773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-587614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 00:27:49.480101  593773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-587614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 00:27:49.505401  593773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-587614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 00:27:49.531346  593773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0814 00:27:49.555556  593773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 00:27:49.581459  593773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 00:27:49.608817  593773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 00:27:49.635562  593773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-587614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 00:27:49.661310  593773 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 00:27:49.679562  593773 ssh_runner.go:195] Run: openssl version
	I0814 00:27:49.685137  593773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 00:27:49.695013  593773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:27:49.699628  593773 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:27:49.699772  593773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:27:49.707539  593773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
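The link name b5213941.0 follows OpenSSL's subject-hash lookup convention: entries in /etc/ssl/certs are named <subject-hash>.0 so verification can locate a CA by hashing the issuer name. The hash comes from the openssl x509 -hash invocation just above:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941  -> hence the symlink /etc/ssl/certs/b5213941.0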
	I0814 00:27:49.717539  593773 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 00:27:49.721041  593773 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 00:27:49.721133  593773 kubeadm.go:392] StartCluster: {Name:addons-785001 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-785001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:27:49.721227  593773 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 00:27:49.721284  593773 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 00:27:49.758113  593773 cri.go:89] found id: ""
	I0814 00:27:49.758184  593773 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 00:27:49.767198  593773 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 00:27:49.776228  593773 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0814 00:27:49.776303  593773 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 00:27:49.785067  593773 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 00:27:49.785098  593773 kubeadm.go:157] found existing configuration files:
	
	I0814 00:27:49.785157  593773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 00:27:49.793848  593773 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 00:27:49.793919  593773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 00:27:49.802928  593773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 00:27:49.811704  593773 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 00:27:49.811767  593773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 00:27:49.820340  593773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 00:27:49.829234  593773 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 00:27:49.829302  593773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 00:27:49.837912  593773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 00:27:49.846838  593773 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 00:27:49.846903  593773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 00:27:49.855305  593773 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0814 00:27:49.895185  593773 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 00:27:49.895594  593773 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 00:27:49.914587  593773 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0814 00:27:49.914797  593773 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0814 00:27:49.914902  593773 kubeadm.go:310] OS: Linux
	I0814 00:27:49.914954  593773 kubeadm.go:310] CGROUPS_CPU: enabled
	I0814 00:27:49.915005  593773 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0814 00:27:49.915055  593773 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0814 00:27:49.915105  593773 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0814 00:27:49.915155  593773 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0814 00:27:49.915205  593773 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0814 00:27:49.915253  593773 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0814 00:27:49.915304  593773 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0814 00:27:49.915352  593773 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0814 00:27:49.971960  593773 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 00:27:49.972067  593773 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 00:27:49.972160  593773 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 00:27:49.977832  593773 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 00:27:49.982105  593773 out.go:204]   - Generating certificates and keys ...
	I0814 00:27:49.982305  593773 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 00:27:49.982422  593773 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 00:27:50.347427  593773 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 00:27:50.713198  593773 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 00:27:50.951360  593773 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 00:27:51.187200  593773 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 00:27:51.551638  593773 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 00:27:51.551861  593773 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-785001 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0814 00:27:52.081164  593773 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 00:27:52.081391  593773 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-785001 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0814 00:27:52.676507  593773 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 00:27:52.984274  593773 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 00:27:53.094947  593773 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 00:27:53.095227  593773 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 00:27:53.547209  593773 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 00:27:53.867013  593773 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 00:27:53.976044  593773 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 00:27:54.278906  593773 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 00:27:55.158175  593773 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 00:27:55.159137  593773 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 00:27:55.162373  593773 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 00:27:55.164575  593773 out.go:204]   - Booting up control plane ...
	I0814 00:27:55.164686  593773 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 00:27:55.164769  593773 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 00:27:55.165817  593773 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 00:27:55.177366  593773 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 00:27:55.184086  593773 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 00:27:55.184378  593773 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 00:27:55.277963  593773 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 00:27:55.278079  593773 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 00:27:57.779119  593773 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.501430914s
	I0814 00:27:57.779206  593773 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 00:28:04.781487  593773 kubeadm.go:310] [api-check] The API server is healthy after 7.002322932s
	I0814 00:28:04.800858  593773 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 00:28:04.814928  593773 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 00:28:04.841776  593773 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 00:28:04.841972  593773 kubeadm.go:310] [mark-control-plane] Marking the node addons-785001 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 00:28:04.853103  593773 kubeadm.go:310] [bootstrap-token] Using token: 5odstc.ud76hb8mkuoxb5ys
	I0814 00:28:04.858113  593773 out.go:204]   - Configuring RBAC rules ...
	I0814 00:28:04.858242  593773 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 00:28:04.871205  593773 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 00:28:04.896284  593773 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 00:28:04.900529  593773 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 00:28:04.905387  593773 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 00:28:04.909050  593773 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 00:28:05.189879  593773 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 00:28:05.612820  593773 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 00:28:06.190565  593773 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 00:28:06.191862  593773 kubeadm.go:310] 
	I0814 00:28:06.191941  593773 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 00:28:06.191952  593773 kubeadm.go:310] 
	I0814 00:28:06.192027  593773 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 00:28:06.192035  593773 kubeadm.go:310] 
	I0814 00:28:06.192067  593773 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 00:28:06.192127  593773 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 00:28:06.192180  593773 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 00:28:06.192188  593773 kubeadm.go:310] 
	I0814 00:28:06.192240  593773 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 00:28:06.192249  593773 kubeadm.go:310] 
	I0814 00:28:06.192296  593773 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 00:28:06.192304  593773 kubeadm.go:310] 
	I0814 00:28:06.192356  593773 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 00:28:06.192431  593773 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 00:28:06.192502  593773 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 00:28:06.192512  593773 kubeadm.go:310] 
	I0814 00:28:06.192594  593773 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 00:28:06.192669  593773 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 00:28:06.192676  593773 kubeadm.go:310] 
	I0814 00:28:06.192757  593773 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5odstc.ud76hb8mkuoxb5ys \
	I0814 00:28:06.192856  593773 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3118dd661631e3e81923b313f83537f90344d4b9adc12e5242bae363bf10aa34 \
	I0814 00:28:06.192876  593773 kubeadm.go:310] 	--control-plane 
	I0814 00:28:06.192880  593773 kubeadm.go:310] 
	I0814 00:28:06.192961  593773 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 00:28:06.192969  593773 kubeadm.go:310] 
	I0814 00:28:06.193047  593773 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5odstc.ud76hb8mkuoxb5ys \
	I0814 00:28:06.193144  593773 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3118dd661631e3e81923b313f83537f90344d4b9adc12e5242bae363bf10aa34 
	I0814 00:28:06.196170  593773 kubeadm.go:310] W0814 00:27:49.890938    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 00:28:06.196461  593773 kubeadm.go:310] W0814 00:27:49.892550    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 00:28:06.196669  593773 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0814 00:28:06.196777  593773 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
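Since the bootstrap token above carries a 24h ttl (per the kubeadm config earlier), an equivalent join command could later be regenerated on the control plane with:
	sudo kubeadm token create --print-join-command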
	I0814 00:28:06.196796  593773 cni.go:84] Creating CNI manager for ""
	I0814 00:28:06.196820  593773 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0814 00:28:06.199391  593773 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 00:28:06.201316  593773 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0814 00:28:06.205396  593773 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0814 00:28:06.205420  593773 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0814 00:28:06.223660  593773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
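After the manifest is applied, the CNI rollout can be confirmed with kubectl; a sketch, assuming minikube's kindnet manifest deploys pods labelled app=kindnet in kube-system:
	kubectl --context addons-785001 -n kube-system get pods -l app=kindnet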
	I0814 00:28:06.523459  593773 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 00:28:06.523625  593773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 00:28:06.523722  593773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-785001 minikube.k8s.io/updated_at=2024_08_14T00_28_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=addons-785001 minikube.k8s.io/primary=true
	I0814 00:28:06.704420  593773 ops.go:34] apiserver oom_adj: -16
	I0814 00:28:06.704543  593773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 00:28:07.204719  593773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 00:28:07.704697  593773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 00:28:08.205137  593773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 00:28:08.704907  593773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 00:28:09.205350  593773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 00:28:09.705143  593773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 00:28:10.205390  593773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 00:28:10.705400  593773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 00:28:10.828844  593773 kubeadm.go:1113] duration metric: took 4.305276073s to wait for elevateKubeSystemPrivileges
	I0814 00:28:10.828870  593773 kubeadm.go:394] duration metric: took 21.107743347s to StartCluster
	I0814 00:28:10.828888  593773 settings.go:142] acquiring lock: {Name:mk71170cca656ddd1090ebe3d7c8f1d1292e0219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:28:10.829003  593773 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-587614/kubeconfig
	I0814 00:28:10.829401  593773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-587614/kubeconfig: {Name:mk6644aa2ccad9457eb20b7034ce6b50a6e41cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:28:10.830240  593773 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 00:28:10.830269  593773 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0814 00:28:10.830512  593773 config.go:182] Loaded profile config "addons-785001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0814 00:28:10.830551  593773 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
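The same per-addon switches are exposed on the CLI; for instance, the volcano addon exercised by this test could be toggled manually with:
	out/minikube-linux-arm64 -p addons-785001 addons enable volcano
	out/minikube-linux-arm64 -p addons-785001 addons disable volcano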
	I0814 00:28:10.830630  593773 addons.go:69] Setting yakd=true in profile "addons-785001"
	I0814 00:28:10.830657  593773 addons.go:234] Setting addon yakd=true in "addons-785001"
	I0814 00:28:10.830697  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:10.831147  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:10.831566  593773 addons.go:69] Setting inspektor-gadget=true in profile "addons-785001"
	I0814 00:28:10.831605  593773 addons.go:234] Setting addon inspektor-gadget=true in "addons-785001"
	I0814 00:28:10.831642  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:10.832077  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:10.832351  593773 addons.go:69] Setting metrics-server=true in profile "addons-785001"
	I0814 00:28:10.832378  593773 addons.go:234] Setting addon metrics-server=true in "addons-785001"
	I0814 00:28:10.832403  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:10.832809  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:10.833314  593773 addons.go:69] Setting cloud-spanner=true in profile "addons-785001"
	I0814 00:28:10.833346  593773 addons.go:234] Setting addon cloud-spanner=true in "addons-785001"
	I0814 00:28:10.833380  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:10.833840  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:10.837292  593773 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-785001"
	I0814 00:28:10.837514  593773 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-785001"
	I0814 00:28:10.837752  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:10.844763  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:10.850467  593773 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-785001"
	I0814 00:28:10.850556  593773 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-785001"
	I0814 00:28:10.850587  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:10.851102  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:10.852794  593773 addons.go:69] Setting registry=true in profile "addons-785001"
	I0814 00:28:10.852840  593773 addons.go:234] Setting addon registry=true in "addons-785001"
	I0814 00:28:10.852874  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:10.853323  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:10.866785  593773 addons.go:69] Setting default-storageclass=true in profile "addons-785001"
	I0814 00:28:10.866841  593773 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-785001"
	I0814 00:28:10.867177  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:10.871022  593773 addons.go:69] Setting storage-provisioner=true in profile "addons-785001"
	I0814 00:28:10.871074  593773 addons.go:234] Setting addon storage-provisioner=true in "addons-785001"
	I0814 00:28:10.871112  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:10.871690  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:10.882144  593773 addons.go:69] Setting gcp-auth=true in profile "addons-785001"
	I0814 00:28:10.882196  593773 mustload.go:65] Loading cluster: addons-785001
	I0814 00:28:10.882390  593773 config.go:182] Loaded profile config "addons-785001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0814 00:28:10.882647  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:10.891230  593773 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-785001"
	I0814 00:28:10.891289  593773 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-785001"
	I0814 00:28:10.904243  593773 addons.go:69] Setting volcano=true in profile "addons-785001"
	I0814 00:28:10.904309  593773 addons.go:234] Setting addon volcano=true in "addons-785001"
	I0814 00:28:10.904348  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:10.908058  593773 addons.go:69] Setting ingress=true in profile "addons-785001"
	I0814 00:28:10.908101  593773 addons.go:234] Setting addon ingress=true in "addons-785001"
	I0814 00:28:10.908161  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:10.908369  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:10.908588  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:10.929761  593773 addons.go:69] Setting volumesnapshots=true in profile "addons-785001"
	I0814 00:28:10.929809  593773 addons.go:234] Setting addon volumesnapshots=true in "addons-785001"
	I0814 00:28:10.929851  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:10.930310  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:10.934413  593773 addons.go:69] Setting ingress-dns=true in profile "addons-785001"
	I0814 00:28:10.934455  593773 addons.go:234] Setting addon ingress-dns=true in "addons-785001"
	I0814 00:28:10.934501  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:10.934992  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
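Every `cli_runner.go:164` entry in the block above shells out to `docker container inspect` with a Go template: `--format={{.State.Status}}` gates each addon on the profile container actually running, and the later variant `-f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` discovers which host port forwards to the container's SSH port. A minimal standalone sketch of both queries, not minikube's actual implementation; the container name `addons-785001` is taken from the log and error handling is simplified:

```go
// Sketch: replicate the two docker-inspect queries seen in the log.
// Assumes a local docker CLI and a container named addons-785001.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func inspect(format string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"addons-785001", "--format", format).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// State check: the addon-enable loop above runs this per addon.
	state, err := inspect("{{.State.Status}}")
	if err != nil {
		panic(err)
	}
	fmt.Println("container state:", state)

	// SSH port lookup: which host port maps to 22/tcp in the container.
	port, err := inspect(`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", port) // the log shows 33508
}
```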
	I0814 00:28:10.966793  593773 out.go:177] * Verifying Kubernetes components...
	I0814 00:28:10.990183  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:11.045259  593773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:28:11.048648  593773 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0814 00:28:11.050504  593773 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0814 00:28:11.052295  593773 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0814 00:28:11.057259  593773 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0814 00:28:11.059002  593773 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0814 00:28:11.061681  593773 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0814 00:28:11.061791  593773 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0814 00:28:11.064023  593773 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0814 00:28:11.063882  593773 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 00:28:11.064137  593773 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 00:28:11.064222  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:11.052307  593773 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0814 00:28:11.064835  593773 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0814 00:28:11.064888  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:11.067924  593773 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0814 00:28:11.050513  593773 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0814 00:28:11.069635  593773 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0814 00:28:11.069763  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:11.081507  593773 addons.go:234] Setting addon default-storageclass=true in "addons-785001"
	I0814 00:28:11.081549  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:11.082533  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:11.095159  593773 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0814 00:28:11.137056  593773 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0814 00:28:11.137145  593773 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0814 00:28:11.147030  593773 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0814 00:28:11.147087  593773 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0814 00:28:11.147176  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:11.147447  593773 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0814 00:28:11.147474  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0814 00:28:11.147516  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:11.154861  593773 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0814 00:28:11.155125  593773 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0814 00:28:11.155144  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0814 00:28:11.155210  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:11.156404  593773 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-785001"
	I0814 00:28:11.156447  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:11.156953  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:11.198468  593773 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 00:28:11.202311  593773 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0814 00:28:11.202320  593773 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0814 00:28:11.204093  593773 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 00:28:11.204482  593773 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0814 00:28:11.209276  593773 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0814 00:28:11.211291  593773 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 00:28:11.211302  593773 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0814 00:28:11.213117  593773 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0814 00:28:11.213141  593773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0814 00:28:11.213214  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:11.219863  593773 out.go:177]   - Using image docker.io/registry:2.8.3
	I0814 00:28:11.220111  593773 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 00:28:11.220127  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 00:28:11.220200  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:11.220200  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:11.223079  593773 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0814 00:28:11.225427  593773 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0814 00:28:11.225446  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0814 00:28:11.225508  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:11.240983  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:28:11.243993  593773 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0814 00:28:11.244020  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0814 00:28:11.245627  593773 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0814 00:28:11.245653  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0814 00:28:11.245724  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:11.262854  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:11.265035  593773 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 00:28:11.265055  593773 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 00:28:11.265108  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:11.280087  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:28:11.280464  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:28:11.281064  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:28:11.281516  593773 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0814 00:28:11.291154  593773 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0814 00:28:11.291175  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0814 00:28:11.291236  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:11.381681  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:28:11.403176  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:28:11.414703  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:28:11.418621  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:28:11.420558  593773 out.go:177]   - Using image docker.io/busybox:stable
	I0814 00:28:11.428483  593773 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0814 00:28:11.430944  593773 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0814 00:28:11.430964  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0814 00:28:11.431032  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:11.458874  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:28:11.464471  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:28:11.469975  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	W0814 00:28:11.476183  593773 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0814 00:28:11.476220  593773 retry.go:31] will retry after 142.118809ms: ssh: handshake failed: EOF
	I0814 00:28:11.485344  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:28:11.485893  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:28:11.522487  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	W0814 00:28:11.619748  593773 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0814 00:28:11.619777  593773 retry.go:31] will retry after 550.396407ms: ssh: handshake failed: EOF
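The two `sshutil.go:64` warnings show the dial logic tolerating transient `ssh: handshake failed: EOF` errors while the node's SSH daemon comes up: each failure is logged and retried after a delay (142ms, then 550ms here). A simplified retry loop in the same spirit; the backoff schedule, jitter, and attempt cap below are illustrative assumptions, not minikube's actual retry.go parameters:

```go
// Sketch: retry a flaky operation with growing, jittered delays,
// loosely mirroring the retry behavior logged above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping between failures.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay each attempt and add jitter, as the
		// varying delays in the log (142ms, 550ms) suggest.
		d := base*time.Duration(1<<i) + time.Duration(rand.Intn(100))*time.Millisecond
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retry(5, 100*time.Millisecond, func() error {
		calls++
		if calls < 3 { // first two dials fail, like the EOFs above
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```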
	I0814 00:28:11.864881  593773 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.034604871s)
	I0814 00:28:11.865025  593773 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 00:28:11.865120  593773 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 00:28:11.913422  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0814 00:28:11.933975  593773 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0814 00:28:11.934047  593773 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0814 00:28:11.949750  593773 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0814 00:28:11.949828  593773 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0814 00:28:11.994759  593773 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0814 00:28:11.994842  593773 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0814 00:28:12.015525  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 00:28:12.109275  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0814 00:28:12.115296  593773 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 00:28:12.115323  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0814 00:28:12.121664  593773 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0814 00:28:12.121697  593773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0814 00:28:12.144285  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0814 00:28:12.218688  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 00:28:12.244492  593773 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0814 00:28:12.244520  593773 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0814 00:28:12.261017  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0814 00:28:12.265755  593773 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0814 00:28:12.265782  593773 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0814 00:28:12.272261  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0814 00:28:12.276698  593773 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0814 00:28:12.276723  593773 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0814 00:28:12.293103  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0814 00:28:12.313929  593773 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0814 00:28:12.313956  593773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0814 00:28:12.490809  593773 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 00:28:12.490832  593773 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 00:28:12.591300  593773 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0814 00:28:12.591329  593773 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0814 00:28:12.612339  593773 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0814 00:28:12.612373  593773 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0814 00:28:12.650627  593773 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0814 00:28:12.650652  593773 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0814 00:28:12.751961  593773 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0814 00:28:12.751986  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0814 00:28:12.772862  593773 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 00:28:12.772889  593773 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 00:28:12.776772  593773 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0814 00:28:12.776799  593773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0814 00:28:12.814231  593773 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0814 00:28:12.814259  593773 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0814 00:28:12.817300  593773 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0814 00:28:12.817324  593773 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0814 00:28:12.841950  593773 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0814 00:28:12.841977  593773 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0814 00:28:12.882785  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 00:28:12.885988  593773 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0814 00:28:12.886016  593773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0814 00:28:12.972263  593773 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0814 00:28:12.972287  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0814 00:28:12.979110  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0814 00:28:13.014047  593773 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 00:28:13.014072  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0814 00:28:13.025251  593773 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0814 00:28:13.025282  593773 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0814 00:28:13.073091  593773 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0814 00:28:13.073119  593773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0814 00:28:13.190986  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0814 00:28:13.205723  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 00:28:13.216729  593773 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0814 00:28:13.216756  593773 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0814 00:28:13.449699  593773 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0814 00:28:13.449729  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0814 00:28:13.518876  593773 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0814 00:28:13.518902  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0814 00:28:13.676308  593773 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0814 00:28:13.676334  593773 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0814 00:28:13.813675  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0814 00:28:13.927322  593773 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.062178231s)
	I0814 00:28:13.927405  593773 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.062365808s)
	I0814 00:28:13.927426  593773 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0814 00:28:13.927514  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.014010663s)
	I0814 00:28:13.927581  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.911971331s)
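The long `sed` pipeline completed above edits the `coredns` ConfigMap in place: it inserts a `hosts` stanza ahead of the `forward . /etc/resolv.conf` line (plus a `log` directive before `errors`) and feeds the result back through `kubectl replace`. Reassembled from the escaped sed expression, the stanza added to the Corefile is:

```
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
```

This makes `host.minikube.internal` resolve to the Docker bridge gateway (192.168.49.1) from inside the cluster, while `fallthrough` hands every other name on to the next CoreDNS plugin.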
	I0814 00:28:13.929067  593773 node_ready.go:35] waiting up to 6m0s for node "addons-785001" to be "Ready" ...
	I0814 00:28:13.934707  593773 node_ready.go:49] node "addons-785001" has status "Ready":"True"
	I0814 00:28:13.934733  593773 node_ready.go:38] duration metric: took 5.635969ms for node "addons-785001" to be "Ready" ...
	I0814 00:28:13.934744  593773 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 00:28:13.954462  593773 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-7n4ht" in "kube-system" namespace to be "Ready" ...
	I0814 00:28:13.966513  593773 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0814 00:28:13.966541  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0814 00:28:14.209648  593773 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0814 00:28:14.209673  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0814 00:28:14.430824  593773 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-785001" context rescaled to 1 replicas
	I0814 00:28:14.496617  593773 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0814 00:28:14.496644  593773 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0814 00:28:14.960925  593773 pod_ready.go:97] error getting pod "coredns-6f6b679f8f-7n4ht" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-7n4ht" not found
	I0814 00:28:14.960957  593773 pod_ready.go:81] duration metric: took 1.006460146s for pod "coredns-6f6b679f8f-7n4ht" in "kube-system" namespace to be "Ready" ...
	E0814 00:28:14.960996  593773 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-7n4ht" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-7n4ht" not found
	I0814 00:28:14.961010  593773 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-n7dhw" in "kube-system" namespace to be "Ready" ...
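The `coredns-6f6b679f8f-7n4ht` wait above ends with "not found (skipping!)" rather than a failure because the deployment was rescaled to 1 replica moments earlier (the `kapi.go:214` line): the watched pod was deleted mid-wait, so minikube skips it and moves on to the surviving replica. A compact client-go analogue of this wait-for-Ready loop; the package paths are standard client-go, the namespace and pod name come from the log, and treating NotFound as "keep polling" stands in for the skip:

```go
// Sketch: wait for a pod to report Ready, tolerating deletion,
// in the spirit of minikube's pod_ready wait above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "kube-system", "coredns-6f6b679f8f-n7dhw" // from the log
	err = wait.PollUntilContextTimeout(context.Background(),
		2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // pod replaced; minikube's "skipping!" case
			}
			if err != nil {
				return false, nil // transient API error: retry
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("ready wait finished:", err)
}
```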
	I0814 00:28:15.069034  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0814 00:28:15.860558  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.751246229s)
	I0814 00:28:16.972777  593773 pod_ready.go:102] pod "coredns-6f6b679f8f-n7dhw" in "kube-system" namespace has status "Ready":"False"
	I0814 00:28:18.431220  593773 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0814 00:28:18.431308  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:18.458850  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:28:18.964421  593773 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0814 00:28:18.989322  593773 pod_ready.go:102] pod "coredns-6f6b679f8f-n7dhw" in "kube-system" namespace has status "Ready":"False"
	I0814 00:28:19.115509  593773 addons.go:234] Setting addon gcp-auth=true in "addons-785001"
	I0814 00:28:19.115567  593773 host.go:66] Checking if "addons-785001" exists ...
	I0814 00:28:19.116028  593773 cli_runner.go:164] Run: docker container inspect addons-785001 --format={{.State.Status}}
	I0814 00:28:19.142945  593773 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0814 00:28:19.143006  593773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785001
	I0814 00:28:19.165841  593773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/addons-785001/id_rsa Username:docker}
	I0814 00:28:20.990608  593773 pod_ready.go:102] pod "coredns-6f6b679f8f-n7dhw" in "kube-system" namespace has status "Ready":"False"
	I0814 00:28:21.131415  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.987090331s)
	I0814 00:28:21.131508  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.912798241s)
	I0814 00:28:21.131541  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.870502284s)
	I0814 00:28:21.131579  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.859297627s)
	I0814 00:28:21.131669  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.838544067s)
	I0814 00:28:21.131681  593773 addons.go:475] Verifying addon ingress=true in "addons-785001"
	I0814 00:28:21.131953  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.249137496s)
	I0814 00:28:21.131976  593773 addons.go:475] Verifying addon metrics-server=true in "addons-785001"
	I0814 00:28:21.132037  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.152892976s)
	I0814 00:28:21.132122  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.941103685s)
	I0814 00:28:21.132151  593773 addons.go:475] Verifying addon registry=true in "addons-785001"
	I0814 00:28:21.132436  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.926677759s)
	W0814 00:28:21.133514  593773 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0814 00:28:21.133536  593773 retry.go:31] will retry after 227.611646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
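This failure is the classic CRD-ordering race, not a broken manifest: the `volumesnapshotclasses` CRD is created in the same `kubectl apply` batch as the `VolumeSnapshotClass` object, and the API server has not yet established the new type when the CR arrives, hence "ensure CRDs are installed first". Minikube simply retries (the re-apply with `--force` succeeds a few lines below). Outside minikube, the usual fix is to wait for the CRD's `Established` condition between two separate applies; a hedged sketch shelling out to kubectl, with the manifest paths as placeholders:

```go
// Sketch: apply CRDs, wait until they are established, then apply
// the custom resources that depend on them.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubectl", args, "failed:", err)
		os.Exit(1)
	}
}

func main() {
	// 1. CRDs first.
	run("apply", "-f", "crds/") // placeholder directory

	// 2. Block until the API server has established the new type;
	//    this is the step the failed batch above skipped.
	run("wait", "--for", "condition=established",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
		"--timeout=60s")

	// 3. Now the VolumeSnapshotClass object can be applied safely.
	run("apply", "-f", "csi-hostpath-snapshotclass.yaml") // placeholder path
}
```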
	I0814 00:28:21.132497  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.318794467s)
	I0814 00:28:21.133922  593773 out.go:177] * Verifying ingress addon...
	I0814 00:28:21.133981  593773 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-785001 service yakd-dashboard -n yakd-dashboard
	
	I0814 00:28:21.140807  593773 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0814 00:28:21.141014  593773 out.go:177] * Verifying registry addon...
	I0814 00:28:21.143774  593773 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0814 00:28:21.196356  593773 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0814 00:28:21.196381  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:21.196786  593773 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0814 00:28:21.196813  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:21.361953  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 00:28:21.649851  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:21.650978  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:22.021327  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.952241702s)
	I0814 00:28:22.021407  593773 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-785001"
	I0814 00:28:22.021595  593773 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.878624086s)
	I0814 00:28:22.023662  593773 out.go:177] * Verifying csi-hostpath-driver addon...
	I0814 00:28:22.023750  593773 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 00:28:22.027382  593773 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0814 00:28:22.029857  593773 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0814 00:28:22.031736  593773 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0814 00:28:22.031798  593773 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0814 00:28:22.060459  593773 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0814 00:28:22.060536  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:22.113379  593773 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0814 00:28:22.113452  593773 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0814 00:28:22.134670  593773 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0814 00:28:22.134773  593773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0814 00:28:22.146948  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:22.149026  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:22.230967  593773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0814 00:28:22.533287  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:22.645217  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:22.647331  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:22.979769  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.61776031s)
	I0814 00:28:23.033162  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:23.146154  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:23.231617  593773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.0005595s)
	I0814 00:28:23.234576  593773 addons.go:475] Verifying addon gcp-auth=true in "addons-785001"
	I0814 00:28:23.238002  593773 out.go:177] * Verifying gcp-auth addon...
	I0814 00:28:23.241906  593773 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0814 00:28:23.245482  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:23.245890  593773 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0814 00:28:23.467413  593773 pod_ready.go:102] pod "coredns-6f6b679f8f-n7dhw" in "kube-system" namespace has status "Ready":"False"
	I0814 00:28:23.532724  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:23.646064  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:23.649690  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:24.033453  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:24.146632  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:24.150228  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:24.532899  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:24.645978  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:24.649353  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:25.034569  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:25.148166  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:25.150895  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:25.469159  593773 pod_ready.go:102] pod "coredns-6f6b679f8f-n7dhw" in "kube-system" namespace has status "Ready":"False"
	I0814 00:28:25.533910  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:25.645648  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:25.648329  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:26.032874  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:26.145524  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:26.147198  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:26.532551  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:26.644907  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:26.648112  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:27.034116  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:27.145750  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:27.148030  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:27.532304  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:27.645695  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:27.647361  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:27.968704  593773 pod_ready.go:92] pod "coredns-6f6b679f8f-n7dhw" in "kube-system" namespace has status "Ready":"True"
	I0814 00:28:27.968727  593773 pod_ready.go:81] duration metric: took 13.007709189s for pod "coredns-6f6b679f8f-n7dhw" in "kube-system" namespace to be "Ready" ...
	I0814 00:28:27.968739  593773 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-785001" in "kube-system" namespace to be "Ready" ...
	I0814 00:28:27.976457  593773 pod_ready.go:92] pod "etcd-addons-785001" in "kube-system" namespace has status "Ready":"True"
	I0814 00:28:27.976481  593773 pod_ready.go:81] duration metric: took 7.735578ms for pod "etcd-addons-785001" in "kube-system" namespace to be "Ready" ...
	I0814 00:28:27.976495  593773 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-785001" in "kube-system" namespace to be "Ready" ...
	I0814 00:28:27.987240  593773 pod_ready.go:92] pod "kube-apiserver-addons-785001" in "kube-system" namespace has status "Ready":"True"
	I0814 00:28:27.987317  593773 pod_ready.go:81] duration metric: took 10.814021ms for pod "kube-apiserver-addons-785001" in "kube-system" namespace to be "Ready" ...
	I0814 00:28:27.987345  593773 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-785001" in "kube-system" namespace to be "Ready" ...
	I0814 00:28:27.994097  593773 pod_ready.go:92] pod "kube-controller-manager-addons-785001" in "kube-system" namespace has status "Ready":"True"
	I0814 00:28:27.994170  593773 pod_ready.go:81] duration metric: took 6.803743ms for pod "kube-controller-manager-addons-785001" in "kube-system" namespace to be "Ready" ...
	I0814 00:28:27.994197  593773 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zhs6l" in "kube-system" namespace to be "Ready" ...
	I0814 00:28:28.000569  593773 pod_ready.go:92] pod "kube-proxy-zhs6l" in "kube-system" namespace has status "Ready":"True"
	I0814 00:28:28.000659  593773 pod_ready.go:81] duration metric: took 6.431363ms for pod "kube-proxy-zhs6l" in "kube-system" namespace to be "Ready" ...
	I0814 00:28:28.000685  593773 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-785001" in "kube-system" namespace to be "Ready" ...
	I0814 00:28:28.033076  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:28.146190  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:28.148009  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:28.365720  593773 pod_ready.go:92] pod "kube-scheduler-addons-785001" in "kube-system" namespace has status "Ready":"True"
	I0814 00:28:28.365792  593773 pod_ready.go:81] duration metric: took 365.084899ms for pod "kube-scheduler-addons-785001" in "kube-system" namespace to be "Ready" ...
	I0814 00:28:28.365822  593773 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-g9jrt" in "kube-system" namespace to be "Ready" ...
	I0814 00:28:28.534054  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:28.655426  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:28.660654  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:29.033253  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:29.145428  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:29.147645  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:29.532144  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:29.645419  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:29.648412  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:30.035303  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:30.146122  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:30.149675  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:30.371508  593773 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-g9jrt" in "kube-system" namespace has status "Ready":"False"
	I0814 00:28:30.532649  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:30.650322  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:30.650478  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:31.033776  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:31.148498  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:31.149905  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:31.533409  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:31.645150  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:31.648675  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:32.036108  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:32.146023  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:32.149059  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:32.373907  593773 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-g9jrt" in "kube-system" namespace has status "Ready":"False"
	I0814 00:28:32.532415  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:32.647714  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:32.648370  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:33.033337  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:33.147547  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:33.150119  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:33.532608  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:33.646215  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:33.648468  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:34.032643  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:34.146033  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:34.148290  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:34.532357  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:34.645883  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:34.648025  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:34.872679  593773 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-g9jrt" in "kube-system" namespace has status "Ready":"False"
	I0814 00:28:35.032832  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:35.144630  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:35.148172  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:35.533296  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:35.645382  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:35.648068  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:36.034236  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:36.146232  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:36.148877  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:36.373124  593773 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-g9jrt" in "kube-system" namespace has status "Ready":"True"
	I0814 00:28:36.373152  593773 pod_ready.go:81] duration metric: took 8.007314813s for pod "nvidia-device-plugin-daemonset-g9jrt" in "kube-system" namespace to be "Ready" ...
	I0814 00:28:36.373162  593773 pod_ready.go:38] duration metric: took 22.438389104s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 00:28:36.373178  593773 api_server.go:52] waiting for apiserver process to appear ...
	I0814 00:28:36.373248  593773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 00:28:36.386800  593773 api_server.go:72] duration metric: took 25.556496055s to wait for apiserver process to appear ...
	I0814 00:28:36.386825  593773 api_server.go:88] waiting for apiserver healthz status ...
	I0814 00:28:36.386844  593773 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0814 00:28:36.394204  593773 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0814 00:28:36.395239  593773 api_server.go:141] control plane version: v1.31.0
	I0814 00:28:36.395262  593773 api_server.go:131] duration metric: took 8.430959ms to wait for apiserver health ...
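The two healthz lines above are a plain GET against https://192.168.49.2:8443/healthz; a healthy apiserver answers 200 with the body "ok". A minimal client-go sketch of the same probe follows (the kubeconfig path is a hypothetical stand-in, not a value taken from this run):

// healthz_probe.go: sketch of an apiserver /healthz check like the one traced above.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// GET /healthz; a healthy control plane returns 200 and the body "ok",
	// matching the "returned 200: ok" lines in this log.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
}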
	I0814 00:28:36.395271  593773 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 00:28:36.404698  593773 system_pods.go:59] 18 kube-system pods found
	I0814 00:28:36.404737  593773 system_pods.go:61] "coredns-6f6b679f8f-n7dhw" [4cce9dcd-c25f-4d6a-a1c6-70a9205a0396] Running
	I0814 00:28:36.404749  593773 system_pods.go:61] "csi-hostpath-attacher-0" [e9c933aa-7e81-4e5b-a5e0-195daee31021] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0814 00:28:36.404759  593773 system_pods.go:61] "csi-hostpath-resizer-0" [3dfb3b18-9661-420a-b521-52580b5c8678] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0814 00:28:36.404770  593773 system_pods.go:61] "csi-hostpathplugin-92t54" [d8bba10e-8dc4-4b55-a042-2fe9661a6eff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0814 00:28:36.404781  593773 system_pods.go:61] "etcd-addons-785001" [e7b1c84f-245a-42ad-a84f-44b60989fcc3] Running
	I0814 00:28:36.404785  593773 system_pods.go:61] "kindnet-qqc2d" [cb5ba9e6-fd18-464a-b282-1874d71f31b1] Running
	I0814 00:28:36.404792  593773 system_pods.go:61] "kube-apiserver-addons-785001" [06367200-16c8-49a6-a6c2-b425cbd15b1e] Running
	I0814 00:28:36.404797  593773 system_pods.go:61] "kube-controller-manager-addons-785001" [cef2cf24-2717-40f9-a516-976b4928a66c] Running
	I0814 00:28:36.404804  593773 system_pods.go:61] "kube-ingress-dns-minikube" [8af119e1-6fe3-417f-b595-a850a21a72bd] Running
	I0814 00:28:36.404807  593773 system_pods.go:61] "kube-proxy-zhs6l" [009d55cf-a6a5-4827-9490-9bf2f52d89c8] Running
	I0814 00:28:36.404811  593773 system_pods.go:61] "kube-scheduler-addons-785001" [d29d6052-ea61-459f-897d-dd2cc06d17db] Running
	I0814 00:28:36.404817  593773 system_pods.go:61] "metrics-server-8988944d9-25gt9" [d49d2e62-4ffd-4d71-b136-92dacc67f2e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 00:28:36.404826  593773 system_pods.go:61] "nvidia-device-plugin-daemonset-g9jrt" [cfa65ea4-62b7-4d0a-9676-26c484d6665c] Running
	I0814 00:28:36.404832  593773 system_pods.go:61] "registry-6fb4cdfc84-jx2zj" [3e30be30-2743-4a1e-9993-0f606c2b2940] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0814 00:28:36.404840  593773 system_pods.go:61] "registry-proxy-l6r74" [49927f73-0585-451a-a297-1a69cbe7f1ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0814 00:28:36.404860  593773 system_pods.go:61] "snapshot-controller-56fcc65765-95c54" [d56d242b-1fe8-45d1-9f75-11652badea12] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0814 00:28:36.404868  593773 system_pods.go:61] "snapshot-controller-56fcc65765-x64dv" [ac96a82f-9431-469b-bc52-820c968e2761] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0814 00:28:36.404875  593773 system_pods.go:61] "storage-provisioner" [c308318a-6a9b-4b3d-a8dc-82e52ab9bd28] Running
	I0814 00:28:36.404882  593773 system_pods.go:74] duration metric: took 9.604731ms to wait for pod list to return data ...
	I0814 00:28:36.404893  593773 default_sa.go:34] waiting for default service account to be created ...
	I0814 00:28:36.407486  593773 default_sa.go:45] found service account: "default"
	I0814 00:28:36.407511  593773 default_sa.go:55] duration metric: took 2.611368ms for default service account to be created ...
	I0814 00:28:36.407520  593773 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 00:28:36.416494  593773 system_pods.go:86] 18 kube-system pods found
	I0814 00:28:36.416533  593773 system_pods.go:89] "coredns-6f6b679f8f-n7dhw" [4cce9dcd-c25f-4d6a-a1c6-70a9205a0396] Running
	I0814 00:28:36.416544  593773 system_pods.go:89] "csi-hostpath-attacher-0" [e9c933aa-7e81-4e5b-a5e0-195daee31021] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0814 00:28:36.416585  593773 system_pods.go:89] "csi-hostpath-resizer-0" [3dfb3b18-9661-420a-b521-52580b5c8678] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0814 00:28:36.416604  593773 system_pods.go:89] "csi-hostpathplugin-92t54" [d8bba10e-8dc4-4b55-a042-2fe9661a6eff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0814 00:28:36.416610  593773 system_pods.go:89] "etcd-addons-785001" [e7b1c84f-245a-42ad-a84f-44b60989fcc3] Running
	I0814 00:28:36.416621  593773 system_pods.go:89] "kindnet-qqc2d" [cb5ba9e6-fd18-464a-b282-1874d71f31b1] Running
	I0814 00:28:36.416625  593773 system_pods.go:89] "kube-apiserver-addons-785001" [06367200-16c8-49a6-a6c2-b425cbd15b1e] Running
	I0814 00:28:36.416630  593773 system_pods.go:89] "kube-controller-manager-addons-785001" [cef2cf24-2717-40f9-a516-976b4928a66c] Running
	I0814 00:28:36.416650  593773 system_pods.go:89] "kube-ingress-dns-minikube" [8af119e1-6fe3-417f-b595-a850a21a72bd] Running
	I0814 00:28:36.416662  593773 system_pods.go:89] "kube-proxy-zhs6l" [009d55cf-a6a5-4827-9490-9bf2f52d89c8] Running
	I0814 00:28:36.416667  593773 system_pods.go:89] "kube-scheduler-addons-785001" [d29d6052-ea61-459f-897d-dd2cc06d17db] Running
	I0814 00:28:36.416673  593773 system_pods.go:89] "metrics-server-8988944d9-25gt9" [d49d2e62-4ffd-4d71-b136-92dacc67f2e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 00:28:36.416682  593773 system_pods.go:89] "nvidia-device-plugin-daemonset-g9jrt" [cfa65ea4-62b7-4d0a-9676-26c484d6665c] Running
	I0814 00:28:36.416689  593773 system_pods.go:89] "registry-6fb4cdfc84-jx2zj" [3e30be30-2743-4a1e-9993-0f606c2b2940] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0814 00:28:36.416696  593773 system_pods.go:89] "registry-proxy-l6r74" [49927f73-0585-451a-a297-1a69cbe7f1ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0814 00:28:36.416705  593773 system_pods.go:89] "snapshot-controller-56fcc65765-95c54" [d56d242b-1fe8-45d1-9f75-11652badea12] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0814 00:28:36.416728  593773 system_pods.go:89] "snapshot-controller-56fcc65765-x64dv" [ac96a82f-9431-469b-bc52-820c968e2761] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0814 00:28:36.416740  593773 system_pods.go:89] "storage-provisioner" [c308318a-6a9b-4b3d-a8dc-82e52ab9bd28] Running
	I0814 00:28:36.416748  593773 system_pods.go:126] duration metric: took 9.222471ms to wait for k8s-apps to be running ...
	I0814 00:28:36.416760  593773 system_svc.go:44] waiting for kubelet service to be running ...
	I0814 00:28:36.416829  593773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:28:36.429270  593773 system_svc.go:56] duration metric: WaitForService took 12.499667ms to wait for kubelet
	I0814 00:28:36.429300  593773 kubeadm.go:582] duration metric: took 25.599001366s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 00:28:36.429325  593773 node_conditions.go:102] verifying NodePressure condition ...
	I0814 00:28:36.432238  593773 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0814 00:28:36.432269  593773 node_conditions.go:123] node cpu capacity is 2
	I0814 00:28:36.432295  593773 node_conditions.go:105] duration metric: took 2.963785ms to run NodePressure ...
	I0814 00:28:36.432307  593773 start.go:241] waiting for startup goroutines ...
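The pod_ready entries in this phase (for example the nvidia-device-plugin-daemonset pod above) come down to reading the pod's PodReady condition from its status. A minimal sketch of that check, assuming a standard client-go clientset and a hypothetical kubeconfig path:

// pod_ready_sketch.go: sketch of the Ready-condition check behind the pod_ready lines.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True, matching the
// "Ready":"True"/"False" values printed in the log above.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "nvidia-device-plugin-daemonset-g9jrt", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
}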
	I0814 00:28:36.532136  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:36.645118  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:36.647044  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:37.033956  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:37.146562  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:37.149062  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:37.533053  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:37.646364  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:37.650537  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:38.035431  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:38.150505  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:38.153749  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:38.533371  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:38.645832  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:38.649033  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:39.032661  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:39.146133  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:39.149772  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:39.533752  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:39.648485  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:39.650538  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:40.051386  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:40.154406  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:40.156002  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:40.532450  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:40.659692  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:40.660765  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:41.033236  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:41.146973  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:41.148876  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:41.532121  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:41.646803  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:41.647818  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:42.033108  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:42.148268  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:42.151120  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:42.532743  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:42.644822  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:42.647771  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:43.032265  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:43.145848  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:43.148685  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:43.532345  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:43.645403  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:43.648558  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:44.034321  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:44.147354  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:44.148978  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:44.533762  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:44.647435  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:44.649887  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:45.035650  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:45.155398  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:45.157792  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:45.539761  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:45.646998  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:45.651080  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:46.033969  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:46.145405  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:46.146816  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:46.531925  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:46.645201  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:46.646897  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:47.034241  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:47.146288  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:47.149369  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:47.532072  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:47.645094  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:47.648029  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:48.032569  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:48.145073  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:48.146567  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:48.532168  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:48.645267  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:48.646910  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:49.033400  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:49.145823  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:49.147833  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:49.533562  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:49.645669  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:49.647551  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:50.035085  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:50.144749  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:50.148055  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:50.532348  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:50.645252  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:50.648086  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:51.035766  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:51.147031  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:51.149130  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:51.531887  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:51.645022  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:51.646948  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:52.033198  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:52.146691  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:52.149077  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:52.535266  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:52.645238  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:52.646798  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:53.032443  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:53.146244  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:53.149806  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:53.535947  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:53.648380  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:53.648960  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 00:28:54.032725  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:54.145440  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:54.148783  593773 kapi.go:107] duration metric: took 33.005004898s to wait for kubernetes.io/minikube-addons=registry ...
	I0814 00:28:54.532724  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:54.645020  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:55.032947  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:55.147320  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:55.532728  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:55.646980  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:56.033023  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:56.145407  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:56.533188  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:56.645865  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:57.033156  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:57.145919  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:57.534155  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:57.673552  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:58.032575  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:58.147346  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:58.532089  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:58.646330  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:59.032211  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:59.145756  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:28:59.532586  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:28:59.645507  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:00.042320  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:00.174256  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:00.533012  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:00.645983  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:01.032632  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:01.145229  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:01.532298  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:01.645933  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:02.032696  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:02.158278  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:02.532495  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:02.646036  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:03.033755  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:03.151510  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:03.533198  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:03.645593  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:04.032679  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:04.145800  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:04.538374  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:04.645767  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:05.033506  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:05.145819  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:05.534759  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:05.645348  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:06.032104  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:06.145668  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:06.533020  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:06.645997  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:07.033536  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:07.145691  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:07.532532  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:07.645929  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:08.032367  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:08.145568  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:08.533291  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:08.645998  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:09.032803  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:09.145348  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:09.532672  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:09.645566  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:10.033748  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:10.145011  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:10.533149  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:10.645660  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:11.032648  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:11.146077  593773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 00:29:11.533513  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:11.646422  593773 kapi.go:107] duration metric: took 50.505613195s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0814 00:29:12.036992  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:12.533677  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:13.033479  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:13.533246  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:14.032764  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:14.533422  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:15.034038  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:15.537168  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:16.032282  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:16.532228  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:17.032328  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:17.532548  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:18.032471  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:18.532303  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:19.032614  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:19.532878  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:20.033128  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 00:29:20.532378  593773 kapi.go:107] duration metric: took 58.504994128s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
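Each kapi.go:96 line in the runs above is one iteration of a poll over a label selector, and the matching kapi.go:107 line reports the total wait. A minimal sketch of such a loop, assuming a standard client-go setup (the 500ms interval and 6-minute timeout are illustrative assumptions, not values read from this log):

// label_wait_sketch.go: sketch of a kapi.go-style wait over a pod label selector.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in ns until all of them are Running.
// Interval and timeout are illustrative assumptions.
func waitForLabel(client kubernetes.Interface, ns, selector string) error {
	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // not there yet; keep polling, like the repeated "Pending" lines above
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(client, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
		panic(err)
	}
	fmt.Println("all pods Running")
}

In this sketch, PollImmediate runs the condition once before the first sleep, so the first status check happens as soon as the wait starts rather than after a full interval.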
	I0814 00:29:45.254366  593773 kapi.go:86] Found 1 Pod for label selector kubernetes.io/minikube-addons=gcp-auth
	I0814 00:29:45.254627  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:45.746113  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:46.245387  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:46.745449  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:47.245985  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:47.746014  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:48.245348  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:48.745479  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:49.245751  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:49.746551  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:50.245233  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:50.746383  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:51.245699  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:51.745752  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:52.246103  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:52.745715  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:53.245542  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:53.745352  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:54.245830  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:54.745912  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:55.245146  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:55.746478  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:56.245282  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:56.745322  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:57.246076  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:57.746124  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:58.245599  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:58.746058  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:59.245722  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:29:59.745713  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:00.257994  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:00.746856  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:01.247899  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:01.745855  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:02.245099  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:02.746157  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:03.245958  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:03.745777  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:04.246338  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:04.745236  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:05.245991  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:05.745366  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:06.245381  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:06.746283  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:07.246251  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:07.745477  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:08.245574  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:08.745290  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:09.246188  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:09.746423  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:10.245116  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:10.746033  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:11.245453  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:11.745860  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:12.246096  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:12.745839  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:13.245290  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:13.746283  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:14.245795  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:14.745646  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:15.245797  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:15.745380  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:16.245959  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:16.745407  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:17.246263  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:17.745375  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:18.245889  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:18.745194  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:19.245998  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:19.746112  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:20.245402  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:20.745735  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:21.245799  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:21.745915  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:22.245409  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:22.745041  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:23.245238  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:23.746064  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:24.245466  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:24.748070  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:25.246106  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:25.745687  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:26.247673  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:26.745505  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:27.245997  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:27.746053  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:28.247655  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:28.745385  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:29.246562  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:29.748375  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:30.245383  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:30.746210  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	... (the same 500ms gcp-auth poll repeats, identical except for timestamps, through 00:30:50) ...
	I0814 00:30:51.248761  593773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 00:30:51.745619  593773 kapi.go:107] duration metric: took 2m28.503711811s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0814 00:30:51.747352  593773 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-785001 cluster.
	I0814 00:30:51.749125  593773 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0814 00:30:51.750661  593773 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0814 00:30:51.752678  593773 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, volcano, storage-provisioner, cloud-spanner, ingress-dns, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0814 00:30:51.754303  593773 addons.go:510] duration metric: took 2m40.923749619s for enable addons: enabled=[nvidia-device-plugin default-storageclass storage-provisioner-rancher volcano storage-provisioner cloud-spanner ingress-dns metrics-server inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0814 00:30:51.754360  593773 start.go:246] waiting for cluster config update ...
	I0814 00:30:51.754387  593773 start.go:255] writing updated cluster config ...
	I0814 00:30:51.754752  593773 ssh_runner.go:195] Run: rm -f paused
	I0814 00:30:52.129373  593773 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 00:30:52.131314  593773 out.go:177] * Done! kubectl is now configured to use "addons-785001" cluster and "default" namespace by default
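
	The 2m28s gcp-auth wait above polls pods matching the label kubernetes.io/minikube-addons=gcp-auth until one is Running. A minimal way to check the same state by hand, assuming the context name from this log:

	  # confirm the gcp-auth webhook pod came up (it did, at 00:30:51)
	  $ kubectl --context addons-785001 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth -o wide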
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	f04c7e98f337b       e2d3313f65753       2 minutes ago       Exited              gadget                                   5                   724f91f5a8e9a       gadget-kjrnf
	d20f57f75dfa3       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   49ec11745e6dc       gcp-auth-89d5ffd79-zjrc4
	39b50a7dda9e3       8b46b1cd48760       4 minutes ago       Running             admission                                0                   7ad934b1ded31       volcano-admission-77d7d48b68-2s99n
	6feabadd4db28       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   4163e45eb5f2a       csi-hostpathplugin-92t54
	2e073fece92bf       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   4163e45eb5f2a       csi-hostpathplugin-92t54
	4c76763660f42       922312104da8a       4 minutes ago       Running             liveness-probe                           0                   4163e45eb5f2a       csi-hostpathplugin-92t54
	f38db30038c05       08f6b2990811a       4 minutes ago       Running             hostpath                                 0                   4163e45eb5f2a       csi-hostpathplugin-92t54
	a9afcd98553f7       0107d56dbc0be       4 minutes ago       Running             node-driver-registrar                    0                   4163e45eb5f2a       csi-hostpathplugin-92t54
	6267061765557       24f8f979639f1       4 minutes ago       Running             controller                               0                   b2cd52108cd7e       ingress-nginx-controller-7559cbf597-9grqv
	160dfa02f1c5a       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   25b8f4a536459       csi-hostpath-attacher-0
	e2314dc4661c8       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   87fc6b3aa0c73       csi-hostpath-resizer-0
	ae0f2cf2b2675       296b5f799fcd8       5 minutes ago       Exited              patch                                    0                   1cd3f6b6d02d6       ingress-nginx-admission-patch-94gp5
	9515838c8c091       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   73c98a2e945f7       volcano-scheduler-576bc46687-7m9zn
	81e73643a3e52       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   8b80cde192c23       snapshot-controller-56fcc65765-95c54
	e0474c2051c34       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   4163e45eb5f2a       csi-hostpathplugin-92t54
	41ffaeae99ff8       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   67a3506d06a54       snapshot-controller-56fcc65765-x64dv
	be49809937a70       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   e896ab86478a7       volcano-controllers-56675bb4d5-8q2tt
	14a872dbf099f       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   4013327c18d04       registry-proxy-l6r74
	afd9af4d7c51b       296b5f799fcd8       5 minutes ago       Exited              create                                   0                   c9eac727c3786       ingress-nginx-admission-create-jtcz4
	5204cf61f0b6d       77bdba588b953       5 minutes ago       Running             yakd                                     0                   60e75789b885a       yakd-dashboard-67d98fc6b-kn98c
	83a5ac8b1bdf0       95dccb4df54ab       5 minutes ago       Running             metrics-server                           0                   e9e185d759cbc       metrics-server-8988944d9-25gt9
	e4d92b62d7315       6fed88f43b276       5 minutes ago       Running             registry                                 0                   7a59a8520ed3b       registry-6fb4cdfc84-jx2zj
	871b733aa5aae       53af6e2c4c343       5 minutes ago       Running             cloud-spanner-emulator                   0                   1b078ecd013c1       cloud-spanner-emulator-c4bc9b5f8-x9t4d
	4eaabcff0f66c       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   436ccd51b1622       local-path-provisioner-86d989889c-l6dcc
	eb922f9ecc102       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   e2d216a8bcf90       nvidia-device-plugin-daemonset-g9jrt
	439a02c6dbe3e       2437cf7621777       5 minutes ago       Running             coredns                                  0                   d9ed0a1ffaf13       coredns-6f6b679f8f-n7dhw
	c1fa9cb0d34bf       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   3ebcfb775d4ab       kube-ingress-dns-minikube
	fd8d28a8e68f0       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   3da717c199d7a       storage-provisioner
	678dc82d7b6c1       71d55d66fd4ee       5 minutes ago       Running             kube-proxy                               0                   c01c6af0b0490       kube-proxy-zhs6l
	ab529dab7002a       d5e283bc63d43       5 minutes ago       Running             kindnet-cni                              0                   6cc139803cdb1       kindnet-qqc2d
	486c5482558d4       cd0f0ae0ec9e0       6 minutes ago       Running             kube-apiserver                           0                   2a9d8c4a5a67c       kube-apiserver-addons-785001
	77a7bed0ed107       fbbbd428abb4d       6 minutes ago       Running             kube-scheduler                           0                   82b389d16fe89       kube-scheduler-addons-785001
	b6c0e5f68eaab       27e3830e14027       6 minutes ago       Running             etcd                                     0                   86256b95c993a       etcd-addons-785001
	df3c09579724d       fcb0683e6bdbd       6 minutes ago       Running             kube-controller-manager                  0                   2a51f97196a21       kube-controller-manager-addons-785001
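
	Everything in the table is Running except gadget, which sits in Exited at attempt 5, i.e. it is restart-looping while the rest of the cluster is healthy. A sketch for pulling its logs off the node, assuming crictl is reachable through minikube ssh (the container ID prefix is taken from the table above):

	  # list the gadget container and dump its last run's output
	  $ minikube -p addons-785001 ssh -- sudo crictl ps -a --name gadget
	  $ minikube -p addons-785001 ssh -- sudo crictl logs f04c7e98f337b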
	
	
	==> containerd <==
	Aug 14 00:31:05 addons-785001 containerd[813]: time="2024-08-14T00:31:05.636867322Z" level=info msg="RemovePodSandbox \"79a5ee54ac6809de06472090a5692f452d2b84f41d45b6f0d89777c039679db7\" returns successfully"
	Aug 14 00:32:04 addons-785001 containerd[813]: time="2024-08-14T00:32:04.610205518Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\""
	Aug 14 00:32:04 addons-785001 containerd[813]: time="2024-08-14T00:32:04.739455964Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: active requests=0, bytes read=89"
	Aug 14 00:32:04 addons-785001 containerd[813]: time="2024-08-14T00:32:04.739486947Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 14 00:32:04 addons-785001 containerd[813]: time="2024-08-14T00:32:04.743071710Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" with image id \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\", size \"69907666\" in 132.803128ms"
	Aug 14 00:32:04 addons-785001 containerd[813]: time="2024-08-14T00:32:04.743255069Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" returns image reference \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\""
	Aug 14 00:32:04 addons-785001 containerd[813]: time="2024-08-14T00:32:04.745823600Z" level=info msg="CreateContainer within sandbox \"724f91f5a8e9a4ea4ee632be33f758ba060e10d5c74d6d2b0c701fa2119265b1\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Aug 14 00:32:04 addons-785001 containerd[813]: time="2024-08-14T00:32:04.764981291Z" level=info msg="CreateContainer within sandbox \"724f91f5a8e9a4ea4ee632be33f758ba060e10d5c74d6d2b0c701fa2119265b1\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"f04c7e98f337beadb0439b22eb6224b69cf4a1ba321a98d1127eee28e034be91\""
	Aug 14 00:32:04 addons-785001 containerd[813]: time="2024-08-14T00:32:04.765545744Z" level=info msg="StartContainer for \"f04c7e98f337beadb0439b22eb6224b69cf4a1ba321a98d1127eee28e034be91\""
	Aug 14 00:32:04 addons-785001 containerd[813]: time="2024-08-14T00:32:04.826097163Z" level=info msg="StartContainer for \"f04c7e98f337beadb0439b22eb6224b69cf4a1ba321a98d1127eee28e034be91\" returns successfully"
	Aug 14 00:32:05 addons-785001 containerd[813]: time="2024-08-14T00:32:05.640999845Z" level=info msg="RemoveContainer for \"ef5dbbddb5a3fbb367a22202f859a9d0735ca91beec814248d65ab8468acc1c9\""
	Aug 14 00:32:05 addons-785001 containerd[813]: time="2024-08-14T00:32:05.648007182Z" level=info msg="RemoveContainer for \"ef5dbbddb5a3fbb367a22202f859a9d0735ca91beec814248d65ab8468acc1c9\" returns successfully"
	Aug 14 00:32:05 addons-785001 containerd[813]: time="2024-08-14T00:32:05.651063357Z" level=info msg="StopPodSandbox for \"48ae4bf256225d057585b445cf96517701de8a18874608ab96b60a4f5f72efd3\""
	Aug 14 00:32:05 addons-785001 containerd[813]: time="2024-08-14T00:32:05.688071964Z" level=info msg="TearDown network for sandbox \"48ae4bf256225d057585b445cf96517701de8a18874608ab96b60a4f5f72efd3\" successfully"
	Aug 14 00:32:05 addons-785001 containerd[813]: time="2024-08-14T00:32:05.688112325Z" level=info msg="StopPodSandbox for \"48ae4bf256225d057585b445cf96517701de8a18874608ab96b60a4f5f72efd3\" returns successfully"
	Aug 14 00:32:05 addons-785001 containerd[813]: time="2024-08-14T00:32:05.690905240Z" level=info msg="RemovePodSandbox for \"48ae4bf256225d057585b445cf96517701de8a18874608ab96b60a4f5f72efd3\""
	Aug 14 00:32:05 addons-785001 containerd[813]: time="2024-08-14T00:32:05.690956374Z" level=info msg="Forcibly stopping sandbox \"48ae4bf256225d057585b445cf96517701de8a18874608ab96b60a4f5f72efd3\""
	Aug 14 00:32:05 addons-785001 containerd[813]: time="2024-08-14T00:32:05.727794471Z" level=info msg="TearDown network for sandbox \"48ae4bf256225d057585b445cf96517701de8a18874608ab96b60a4f5f72efd3\" successfully"
	Aug 14 00:32:05 addons-785001 containerd[813]: time="2024-08-14T00:32:05.747192473Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48ae4bf256225d057585b445cf96517701de8a18874608ab96b60a4f5f72efd3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 14 00:32:05 addons-785001 containerd[813]: time="2024-08-14T00:32:05.747339082Z" level=info msg="RemovePodSandbox \"48ae4bf256225d057585b445cf96517701de8a18874608ab96b60a4f5f72efd3\" returns successfully"
	Aug 14 00:32:06 addons-785001 containerd[813]: time="2024-08-14T00:32:06.177465174Z" level=info msg="shim disconnected" id=f04c7e98f337beadb0439b22eb6224b69cf4a1ba321a98d1127eee28e034be91 namespace=k8s.io
	Aug 14 00:32:06 addons-785001 containerd[813]: time="2024-08-14T00:32:06.177589628Z" level=warning msg="cleaning up after shim disconnected" id=f04c7e98f337beadb0439b22eb6224b69cf4a1ba321a98d1127eee28e034be91 namespace=k8s.io
	Aug 14 00:32:06 addons-785001 containerd[813]: time="2024-08-14T00:32:06.177601288Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 14 00:32:06 addons-785001 containerd[813]: time="2024-08-14T00:32:06.781253863Z" level=info msg="RemoveContainer for \"34c3c5549ca787d4ba409b0db30e183510c423be44631d5a4df6472af6c1e4a7\""
	Aug 14 00:32:06 addons-785001 containerd[813]: time="2024-08-14T00:32:06.793477341Z" level=info msg="RemoveContainer for \"34c3c5549ca787d4ba409b0db30e183510c423be44631d5a4df6472af6c1e4a7\" returns successfully"
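
	These entries show the gadget container being created and started at 00:32:04 and its shim disconnecting at 00:32:06, which matches the Exited/attempt-5 state above; the surrounding RemovePodSandbox lines are routine sandbox garbage collection. A sketch for replaying that window from the node's journal, assuming containerd runs under systemd inside the minikube node:

	  # replay containerd's journal around the gadget restart
	  $ minikube -p addons-785001 ssh -- sudo journalctl -u containerd --since 00:32:00 --until 00:32:10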
	
	
	==> coredns [439a02c6dbe3e9022b62f48659c578e914967664be40b58e759209004b39aec7] <==
	[INFO] 10.244.0.10:60166 - 33380 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099117s
	[INFO] 10.244.0.10:55386 - 46475 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002614724s
	[INFO] 10.244.0.10:55386 - 24717 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002451386s
	[INFO] 10.244.0.10:35078 - 49144 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000122387s
	[INFO] 10.244.0.10:35078 - 27642 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000057264s
	[INFO] 10.244.0.10:43043 - 42805 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000105394s
	[INFO] 10.244.0.10:43043 - 19753 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000225369s
	[INFO] 10.244.0.10:38783 - 48908 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000043873s
	[INFO] 10.244.0.10:38783 - 37646 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000044677s
	[INFO] 10.244.0.10:43944 - 45517 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041469s
	[INFO] 10.244.0.10:43944 - 60355 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000047458s
	[INFO] 10.244.0.10:41039 - 59012 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002818801s
	[INFO] 10.244.0.10:41039 - 49286 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00311498s
	[INFO] 10.244.0.10:35365 - 33810 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000073961s
	[INFO] 10.244.0.10:35365 - 49935 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000043053s
	[INFO] 10.244.0.24:33016 - 46684 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002159884s
	[INFO] 10.244.0.24:34985 - 23195 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002370016s
	[INFO] 10.244.0.24:47182 - 20234 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124775s
	[INFO] 10.244.0.24:57381 - 13629 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00015428s
	[INFO] 10.244.0.24:51140 - 38874 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105526s
	[INFO] 10.244.0.24:59438 - 32339 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109637s
	[INFO] 10.244.0.24:54402 - 13657 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002115839s
	[INFO] 10.244.0.24:52299 - 47894 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002193983s
	[INFO] 10.244.0.24:41943 - 5521 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000957394s
	[INFO] 10.244.0.24:32864 - 9847 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001041309s
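
	The NXDOMAIN bursts above are not errors: with the default ndots:5, a name like registry.kube-system.svc.cluster.local is first tried with each search domain appended (.kube-system.svc.cluster.local, .svc.cluster.local, .cluster.local, .us-east-2.compute.internal) before the bare query returns NOERROR. A sketch for confirming the search list from a pod, assuming the pod image ships cat:

	  # show the search domains and ndots option the queries above are expanding against
	  $ kubectl --context addons-785001 -n kube-system exec registry-proxy-l6r74 -- cat /etc/resolv.conf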
	
	
	==> describe nodes <==
	Name:               addons-785001
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-785001
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=addons-785001
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T00_28_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-785001
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-785001"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:28:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-785001
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:34:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 00:31:09 +0000   Wed, 14 Aug 2024 00:27:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 00:31:09 +0000   Wed, 14 Aug 2024 00:27:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 00:31:09 +0000   Wed, 14 Aug 2024 00:27:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 00:31:09 +0000   Wed, 14 Aug 2024 00:28:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-785001
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc6625c757014b988512caf8db203632
	  System UUID:                2b041d59-c814-4494-8b67-30d83bfa88c4
	  Boot ID:                    c683155d-ca4f-4aaf-b294-1781f14fe058
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-c4bc9b5f8-x9t4d       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gadget                      gadget-kjrnf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  gcp-auth                    gcp-auth-89d5ffd79-zjrc4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  ingress-nginx               ingress-nginx-controller-7559cbf597-9grqv    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m52s
	  kube-system                 coredns-6f6b679f8f-n7dhw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 csi-hostpathplugin-92t54                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 etcd-addons-785001                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m5s
	  kube-system                 kindnet-qqc2d                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m
	  kube-system                 kube-apiserver-addons-785001                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-addons-785001        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-zhs6l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-scheduler-addons-785001                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 metrics-server-8988944d9-25gt9               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m54s
	  kube-system                 nvidia-device-plugin-daemonset-g9jrt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 registry-6fb4cdfc84-jx2zj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 registry-proxy-l6r74                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 snapshot-controller-56fcc65765-95c54         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 snapshot-controller-56fcc65765-x64dv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  local-path-storage          local-path-provisioner-86d989889c-l6dcc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  volcano-system              volcano-admission-77d7d48b68-2s99n           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  volcano-system              volcano-controllers-56675bb4d5-8q2tt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  volcano-system              volcano-scheduler-576bc46687-7m9zn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-kn98c               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m58s  kube-proxy       
	  Normal   Starting                 6m5s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m5s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m5s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m5s   kubelet          Node addons-785001 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m5s   kubelet          Node addons-785001 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m5s   kubelet          Node addons-785001 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m1s   node-controller  Node addons-785001 event: Registered Node addons-785001 in Controller
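
	Note the scheduling headroom this node description leaves: the node advertises 2 CPUs (2000m) and 1050m is already requested, so only 950m remains, and any pod requesting a full CPU (1000m) cannot fit on this node. A quick way to recompute that from the cluster, assuming the context name from this log:

	  # 2000m allocatable - 1050m requested = 950m free, less than one full CPU
	  $ kubectl --context addons-785001 describe node addons-785001 | grep -A 5 'Allocated resources'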
	
	
	==> dmesg <==
	[Aug13 23:25] hrtimer: interrupt took 61159531 ns
	[Aug13 23:26] FS-Cache: Duplicate cookie detected
	[  +0.000725] FS-Cache: O-cookie c=00000025 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000930] FS-Cache: O-cookie d=00000000d9aec83b{9P.session} n=0000000064e35e94
	[  +0.001019] FS-Cache: O-key=[10] '34323937373239343730'
	[  +0.000715] FS-Cache: N-cookie c=00000026 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000865] FS-Cache: N-cookie d=00000000d9aec83b{9P.session} n=0000000040e689ea
	[  +0.000997] FS-Cache: N-key=[10] '34323937373239343730'
	
	
	==> etcd [b6c0e5f68eaab1584b1d567565e5e4848e675ec0380f48f21d34c7b679d8a4ee] <==
	{"level":"info","ts":"2024-08-14T00:27:58.692452Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-14T00:27:58.692552Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-14T00:27:58.692567Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-14T00:27:58.692888Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-14T00:27:58.692919Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-14T00:27:59.167524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-14T00:27:59.167583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-14T00:27:59.167611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-14T00:27:59.167633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-14T00:27:59.167639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-14T00:27:59.167650Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-14T00:27:59.167658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-14T00:27:59.170846Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-785001 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T00:27:59.171026Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:27:59.171134Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:27:59.171444Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:27:59.172232Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T00:27:59.174709Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T00:27:59.177766Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T00:27:59.178068Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:27:59.178254Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:27:59.178351Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:27:59.179127Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T00:27:59.180186Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-14T00:27:59.187212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [d20f57f75dfa3cd00df4f965eac0d661e014bb301b36ac932d34edea50f6dc0b] <==
	2024/08/14 00:30:51 GCP Auth Webhook started!
	2024/08/14 00:31:08 Ready to marshal response ...
	2024/08/14 00:31:08 Ready to write response ...
	2024/08/14 00:31:08 Ready to marshal response ...
	2024/08/14 00:31:08 Ready to write response ...
	
	
	==> kernel <==
	 00:34:10 up  4:16,  0 users,  load average: 0.49, 1.12, 1.98
	Linux addons-785001 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [ab529dab7002ac378b6fe71fe9d38d9008ff886e94d8b5420cba7b9f6e6938a3] <==
	I0814 00:32:42.443837       1 main.go:299] handling current node
	I0814 00:32:52.444138       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 00:32:52.444178       1 main.go:299] handling current node
	I0814 00:33:02.443871       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 00:33:02.443908       1 main.go:299] handling current node
	I0814 00:33:12.443708       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 00:33:12.443748       1 main.go:299] handling current node
	W0814 00:33:16.446003       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 00:33:16.446038       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0814 00:33:20.875306       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0814 00:33:20.875354       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0814 00:33:22.444358       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 00:33:22.444405       1 main.go:299] handling current node
	I0814 00:33:32.444048       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 00:33:32.444089       1 main.go:299] handling current node
	W0814 00:33:38.582209       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0814 00:33:38.582249       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0814 00:33:42.444194       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 00:33:42.444230       1 main.go:299] handling current node
	I0814 00:33:52.444035       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 00:33:52.444071       1 main.go:299] handling current node
	W0814 00:33:54.661558       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 00:33:54.661594       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0814 00:34:02.443721       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 00:34:02.443769       1 main.go:299] handling current node
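
	The node handling above keeps succeeding; only the cluster-scope list/watch calls fail, because the kindnet service account lacks RBAC for namespaces, pods, and networkpolicies. A sketch for verifying the missing permission directly, impersonating the service account named in the errors:

	  # "no" here confirms the forbidden errors are RBAC, not connectivity
	  $ kubectl --context addons-785001 auth can-i list namespaces --as=system:serviceaccount:kube-system:kindnet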
	
	
	==> kube-apiserver [486c5482558d43c5010a7a1b5dc088e7a9e63b5506058822f3fb8b28ff806caa] <==
	W0814 00:29:19.912457       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.231.7:443: connect: connection refused
	W0814 00:29:20.988703       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.231.7:443: connect: connection refused
	W0814 00:29:22.089680       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.231.7:443: connect: connection refused
	W0814 00:29:23.157545       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.231.7:443: connect: connection refused
	W0814 00:29:24.180988       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.231.7:443: connect: connection refused
	W0814 00:29:25.209913       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.231.7:443: connect: connection refused
	W0814 00:29:26.231896       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.212.45:443: connect: connection refused
	E0814 00:29:26.231944       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.212.45:443: connect: connection refused" logger="UnhandledError"
	W0814 00:29:26.233591       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.103.231.7:443: connect: connection refused
	W0814 00:29:26.286625       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.212.45:443: connect: connection refused
	E0814 00:29:26.286666       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.212.45:443: connect: connection refused" logger="UnhandledError"
	W0814 00:29:26.288408       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.103.231.7:443: connect: connection refused
	W0814 00:29:26.292570       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.231.7:443: connect: connection refused
	W0814 00:29:27.360609       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.231.7:443: connect: connection refused
	W0814 00:29:28.456012       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.231.7:443: connect: connection refused
	W0814 00:29:29.499845       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.231.7:443: connect: connection refused
	W0814 00:29:30.579960       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.231.7:443: connect: connection refused
	W0814 00:29:45.173034       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.212.45:443: connect: connection refused
	E0814 00:29:45.173181       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.212.45:443: connect: connection refused" logger="UnhandledError"
	W0814 00:30:26.244611       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.212.45:443: connect: connection refused
	E0814 00:30:26.244660       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.212.45:443: connect: connection refused" logger="UnhandledError"
	W0814 00:30:26.295279       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.212.45:443: connect: connection refused
	E0814 00:30:26.295377       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.212.45:443: connect: connection refused" logger="UnhandledError"
	I0814 00:31:08.667246       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0814 00:31:08.732606       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
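
	The pattern above is the addon bring-up race spelled out: mutatequeue.volcano.sh and mutatepod.volcano.sh fail closed (matching requests are rejected) while gcp-auth-mutate.k8s.io fails open (only logged), and both clear once their webhook backends get endpoints, with volcano resources admitted successfully by 00:31:08. A sketch for checking a webhook backend during such a stall, using the service name from the webhook URLs above:

	  # an empty ENDPOINTS column here would explain the "connection refused" calls
	  $ kubectl --context addons-785001 -n volcano-system get endpoints volcano-admission-service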
	
	
	==> kube-controller-manager [df3c09579724d4d5db1d6ba547aece814b507b5172717f14b7fa0557f467e705] <==
	I0814 00:30:26.275512       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0814 00:30:26.277116       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0814 00:30:26.289504       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0814 00:30:26.303760       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0814 00:30:26.312495       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0814 00:30:26.316859       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0814 00:30:26.331668       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0814 00:30:27.494514       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0814 00:30:27.505753       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0814 00:30:28.617793       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0814 00:30:28.640409       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0814 00:30:29.623722       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0814 00:30:29.632724       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0814 00:30:29.640021       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0814 00:30:29.647421       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0814 00:30:29.656977       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0814 00:30:29.663393       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0814 00:30:51.592975       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="13.322033ms"
	I0814 00:30:51.593285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="264.418µs"
	I0814 00:30:59.023218       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0814 00:30:59.026338       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0814 00:30:59.080941       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0814 00:30:59.086013       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0814 00:31:08.386880       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0814 00:31:09.031762       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-785001"
	
	
	==> kube-proxy [678dc82d7b6c10e6050162281a88f33b598c8296e8b0cd62f83b538dc5544083] <==
	I0814 00:28:12.104481       1 server_linux.go:66] "Using iptables proxy"
	I0814 00:28:12.168315       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0814 00:28:12.168394       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 00:28:12.195729       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0814 00:28:12.195792       1 server_linux.go:169] "Using iptables Proxier"
	I0814 00:28:12.197981       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 00:28:12.198271       1 server.go:483] "Version info" version="v1.31.0"
	I0814 00:28:12.198287       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:28:12.199742       1 config.go:197] "Starting service config controller"
	I0814 00:28:12.199773       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 00:28:12.199795       1 config.go:104] "Starting endpoint slice config controller"
	I0814 00:28:12.199799       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 00:28:12.202501       1 config.go:326] "Starting node config controller"
	I0814 00:28:12.202517       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 00:28:12.300764       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 00:28:12.300827       1 shared_informer.go:320] Caches are synced for service config
	I0814 00:28:12.303174       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [77a7bed0ed1077fd608ea4cf178d55352876089d4a5212018ce7c69d14374286] <==
	W0814 00:28:03.092461       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 00:28:03.092487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:03.092704       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 00:28:03.092750       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0814 00:28:03.911880       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 00:28:03.912102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:03.974994       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 00:28:03.975045       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0814 00:28:03.975276       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 00:28:03.975328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:03.983350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 00:28:03.983588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:04.051847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 00:28:04.051958       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:04.110936       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 00:28:04.110985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:04.111753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 00:28:04.111791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:04.142372       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 00:28:04.142529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:04.159059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 00:28:04.159205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:04.166445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 00:28:04.166577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0814 00:28:06.257067       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
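Note: the reflector "forbidden" warnings above are a normal kube-scheduler start-up pattern: the scheduler races its own RBAC bindings and the readability of the extension-apiserver-authentication ConfigMap, and the errors stop once the final "Caches are synced" line appears. If such errors persisted past start-up, a hedged first check against the same cluster would be whether the scheduler identity actually holds the list permissions it needs:

	kubectl auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler
	kubectl auth can-i list configmaps -n kube-system --as=system:kube-scheduler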
	==> kubelet <==
	Aug 14 00:32:08 addons-785001 kubelet[1467]: E0814 00:32:08.785138    1467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-kjrnf_gadget(ed1b88d3-4514-4ea2-a47a-f78a56eb550b)\"" pod="gadget/gadget-kjrnf" podUID="ed1b88d3-4514-4ea2-a47a-f78a56eb550b"
	Aug 14 00:32:16 addons-785001 kubelet[1467]: I0814 00:32:16.608917    1467 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-jx2zj" secret="" err="secret \"gcp-auth\" not found"
	Aug 14 00:32:17 addons-785001 kubelet[1467]: I0814 00:32:17.609237    1467 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-g9jrt" secret="" err="secret \"gcp-auth\" not found"
	Aug 14 00:32:21 addons-785001 kubelet[1467]: I0814 00:32:21.609311    1467 scope.go:117] "RemoveContainer" containerID="f04c7e98f337beadb0439b22eb6224b69cf4a1ba321a98d1127eee28e034be91"
	Aug 14 00:32:21 addons-785001 kubelet[1467]: E0814 00:32:21.609947    1467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-kjrnf_gadget(ed1b88d3-4514-4ea2-a47a-f78a56eb550b)\"" pod="gadget/gadget-kjrnf" podUID="ed1b88d3-4514-4ea2-a47a-f78a56eb550b"
	Aug 14 00:32:22 addons-785001 kubelet[1467]: I0814 00:32:22.609630    1467 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-l6r74" secret="" err="secret \"gcp-auth\" not found"
	Aug 14 00:32:32 addons-785001 kubelet[1467]: I0814 00:32:32.609175    1467 scope.go:117] "RemoveContainer" containerID="f04c7e98f337beadb0439b22eb6224b69cf4a1ba321a98d1127eee28e034be91"
	Aug 14 00:32:32 addons-785001 kubelet[1467]: E0814 00:32:32.609360    1467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-kjrnf_gadget(ed1b88d3-4514-4ea2-a47a-f78a56eb550b)\"" pod="gadget/gadget-kjrnf" podUID="ed1b88d3-4514-4ea2-a47a-f78a56eb550b"
	Aug 14 00:32:45 addons-785001 kubelet[1467]: I0814 00:32:45.610068    1467 scope.go:117] "RemoveContainer" containerID="f04c7e98f337beadb0439b22eb6224b69cf4a1ba321a98d1127eee28e034be91"
	Aug 14 00:32:45 addons-785001 kubelet[1467]: E0814 00:32:45.611349    1467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-kjrnf_gadget(ed1b88d3-4514-4ea2-a47a-f78a56eb550b)\"" pod="gadget/gadget-kjrnf" podUID="ed1b88d3-4514-4ea2-a47a-f78a56eb550b"
	Aug 14 00:32:59 addons-785001 kubelet[1467]: I0814 00:32:59.609657    1467 scope.go:117] "RemoveContainer" containerID="f04c7e98f337beadb0439b22eb6224b69cf4a1ba321a98d1127eee28e034be91"
	Aug 14 00:32:59 addons-785001 kubelet[1467]: E0814 00:32:59.610303    1467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-kjrnf_gadget(ed1b88d3-4514-4ea2-a47a-f78a56eb550b)\"" pod="gadget/gadget-kjrnf" podUID="ed1b88d3-4514-4ea2-a47a-f78a56eb550b"
	Aug 14 00:33:11 addons-785001 kubelet[1467]: I0814 00:33:11.609805    1467 scope.go:117] "RemoveContainer" containerID="f04c7e98f337beadb0439b22eb6224b69cf4a1ba321a98d1127eee28e034be91"
	Aug 14 00:33:11 addons-785001 kubelet[1467]: E0814 00:33:11.610467    1467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-kjrnf_gadget(ed1b88d3-4514-4ea2-a47a-f78a56eb550b)\"" pod="gadget/gadget-kjrnf" podUID="ed1b88d3-4514-4ea2-a47a-f78a56eb550b"
	Aug 14 00:33:24 addons-785001 kubelet[1467]: I0814 00:33:24.608730    1467 scope.go:117] "RemoveContainer" containerID="f04c7e98f337beadb0439b22eb6224b69cf4a1ba321a98d1127eee28e034be91"
	Aug 14 00:33:24 addons-785001 kubelet[1467]: E0814 00:33:24.608947    1467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-kjrnf_gadget(ed1b88d3-4514-4ea2-a47a-f78a56eb550b)\"" pod="gadget/gadget-kjrnf" podUID="ed1b88d3-4514-4ea2-a47a-f78a56eb550b"
	Aug 14 00:33:26 addons-785001 kubelet[1467]: I0814 00:33:26.609494    1467 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-l6r74" secret="" err="secret \"gcp-auth\" not found"
	Aug 14 00:33:37 addons-785001 kubelet[1467]: I0814 00:33:37.609501    1467 scope.go:117] "RemoveContainer" containerID="f04c7e98f337beadb0439b22eb6224b69cf4a1ba321a98d1127eee28e034be91"
	Aug 14 00:33:37 addons-785001 kubelet[1467]: E0814 00:33:37.609690    1467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-kjrnf_gadget(ed1b88d3-4514-4ea2-a47a-f78a56eb550b)\"" pod="gadget/gadget-kjrnf" podUID="ed1b88d3-4514-4ea2-a47a-f78a56eb550b"
	Aug 14 00:33:38 addons-785001 kubelet[1467]: I0814 00:33:38.609422    1467 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-g9jrt" secret="" err="secret \"gcp-auth\" not found"
	Aug 14 00:33:44 addons-785001 kubelet[1467]: I0814 00:33:44.609322    1467 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-jx2zj" secret="" err="secret \"gcp-auth\" not found"
	Aug 14 00:33:49 addons-785001 kubelet[1467]: I0814 00:33:49.609661    1467 scope.go:117] "RemoveContainer" containerID="f04c7e98f337beadb0439b22eb6224b69cf4a1ba321a98d1127eee28e034be91"
	Aug 14 00:33:49 addons-785001 kubelet[1467]: E0814 00:33:49.610338    1467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-kjrnf_gadget(ed1b88d3-4514-4ea2-a47a-f78a56eb550b)\"" pod="gadget/gadget-kjrnf" podUID="ed1b88d3-4514-4ea2-a47a-f78a56eb550b"
	Aug 14 00:34:01 addons-785001 kubelet[1467]: I0814 00:34:01.608821    1467 scope.go:117] "RemoveContainer" containerID="f04c7e98f337beadb0439b22eb6224b69cf4a1ba321a98d1127eee28e034be91"
	Aug 14 00:34:01 addons-785001 kubelet[1467]: E0814 00:34:01.609015    1467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-kjrnf_gadget(ed1b88d3-4514-4ea2-a47a-f78a56eb550b)\"" pod="gadget/gadget-kjrnf" podUID="ed1b88d3-4514-4ea2-a47a-f78a56eb550b"
	
	
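Note: the kubelet loop above shows one real fault (the gadget container in CrashLoopBackOff with a 2m40s back-off) and one typically harmless warning (the missing "gcp-auth" pull secret, expected when the gcp-auth addon is not enabled). Neither is what failed this test, but hedged commands to inspect the crashing container, assuming the cluster from this run is still up:

	kubectl -n gadget describe pod gadget-kjrnf
	kubectl -n gadget logs gadget-kjrnf --previous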
	==> storage-provisioner [fd8d28a8e68f0cf57610f16b4f4d55d1a7539f82a93c84857ca2b0bf0c58c6a4] <==
	I0814 00:28:16.915627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 00:28:16.938867       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 00:28:16.938939       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 00:28:16.955563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 00:28:16.957535       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-785001_bd2e28d5-3f3a-498b-9fc9-9ca3712566d3!
	I0814 00:28:16.964944       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ba29f08-14e9-4725-a5b8-26a1913ca115", APIVersion:"v1", ResourceVersion:"555", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-785001_bd2e28d5-3f3a-498b-9fc9-9ca3712566d3 became leader
	I0814 00:28:17.057807       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-785001_bd2e28d5-3f3a-498b-9fc9-9ca3712566d3!
	

-- /stdout --
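Note: the storage-provisioner log shows a clean start: it acquired its leader-election lock (an Endpoints object named k8s.io-minikube-hostpath in kube-system, per the LeaderElection event above) and started its controller, so hostpath provisioning was not a factor in this failure. To see the current lock holder on the same cluster:

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml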
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-785001 -n addons-785001
helpers_test.go:261: (dbg) Run:  kubectl --context addons-785001 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-jtcz4 ingress-nginx-admission-patch-94gp5 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-785001 describe pod ingress-nginx-admission-create-jtcz4 ingress-nginx-admission-patch-94gp5 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-785001 describe pod ingress-nginx-admission-create-jtcz4 ingress-nginx-admission-patch-94gp5 test-job-nginx-0: exit status 1 (86.021575ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jtcz4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-94gp5" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-785001 describe pod ingress-nginx-admission-create-jtcz4 ingress-nginx-admission-patch-94gp5 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.78s)
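Note: the failure reduces to test-job-nginx-0, listed among the non-running pods above: the Volcano job's pod never reached Running before the test's wait expired. On a host this small (the docker info captured later in this report shows NCPU:2), hedged first steps are to read the pod's scheduling events and, if CPU turns out to be the bottleneck, recreate the profile with more CPUs; a docker-driver node cannot be resized in place, so this is a delete-and-restart sketch with an assumed CPU count:

	kubectl --context addons-785001 -n my-volcano describe pod test-job-nginx-0
	out/minikube-linux-arm64 delete -p addons-785001
	out/minikube-linux-arm64 start -p addons-785001 --cpus=4 --driver=docker --container-runtime=containerd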


Test pass (299/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.92
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 6.82
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.19
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 215.55
31 TestAddons/serial/GCPAuth/Namespaces 0.17
33 TestAddons/parallel/Registry 15.53
34 TestAddons/parallel/Ingress 19.92
35 TestAddons/parallel/InspektorGadget 12.32
36 TestAddons/parallel/MetricsServer 6.9
39 TestAddons/parallel/CSI 55.3
40 TestAddons/parallel/Headlamp 15.82
41 TestAddons/parallel/CloudSpanner 5.6
42 TestAddons/parallel/LocalPath 53.05
43 TestAddons/parallel/NvidiaDevicePlugin 5.64
44 TestAddons/parallel/Yakd 11.85
45 TestAddons/StoppedEnableDisable 12.32
46 TestCertOptions 38.78
47 TestCertExpiration 235.77
49 TestForceSystemdFlag 38.05
50 TestForceSystemdEnv 41.7
51 TestDockerEnvContainerd 47.01
56 TestErrorSpam/setup 30.31
57 TestErrorSpam/start 0.72
58 TestErrorSpam/status 1.06
59 TestErrorSpam/pause 1.79
60 TestErrorSpam/unpause 1.81
61 TestErrorSpam/stop 1.71
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 50.36
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.56
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.1
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.15
73 TestFunctional/serial/CacheCmd/cache/add_local 1.3
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.98
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 45.8
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.63
84 TestFunctional/serial/LogsFileCmd 1.74
85 TestFunctional/serial/InvalidService 4.54
87 TestFunctional/parallel/ConfigCmd 0.5
88 TestFunctional/parallel/DashboardCmd 9.31
89 TestFunctional/parallel/DryRun 0.44
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1.27
95 TestFunctional/parallel/ServiceCmdConnect 9.63
96 TestFunctional/parallel/AddonsCmd 0.18
97 TestFunctional/parallel/PersistentVolumeClaim 25.61
99 TestFunctional/parallel/SSHCmd 0.82
100 TestFunctional/parallel/CpCmd 1.96
102 TestFunctional/parallel/FileSync 0.34
103 TestFunctional/parallel/CertSync 2.17
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
111 TestFunctional/parallel/License 0.26
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.32
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
125 TestFunctional/parallel/ServiceCmd/List 0.57
126 TestFunctional/parallel/ProfileCmd/profile_list 0.47
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
129 TestFunctional/parallel/MountCmd/any-port 7.21
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
131 TestFunctional/parallel/ServiceCmd/Format 0.41
132 TestFunctional/parallel/ServiceCmd/URL 0.44
133 TestFunctional/parallel/MountCmd/specific-port 2.53
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.72
135 TestFunctional/parallel/Version/short 0.08
136 TestFunctional/parallel/Version/components 1.26
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.04
142 TestFunctional/parallel/ImageCommands/Setup 0.76
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.54
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.3
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.56
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.41
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.88
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 137.74
160 TestMultiControlPlane/serial/DeployApp 30.48
161 TestMultiControlPlane/serial/PingHostFromPods 1.65
162 TestMultiControlPlane/serial/AddWorkerNode 24.21
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.74
165 TestMultiControlPlane/serial/CopyFile 18.75
166 TestMultiControlPlane/serial/StopSecondaryNode 12.92
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.85
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.8
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 149.57
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.58
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.54
173 TestMultiControlPlane/serial/StopCluster 36.03
174 TestMultiControlPlane/serial/RestartCluster 80.9
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
176 TestMultiControlPlane/serial/AddSecondaryNode 39.95
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.75
181 TestJSONOutput/start/Command 51.28
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.74
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.66
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.76
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
206 TestKicCustomNetwork/create_custom_network 38.24
207 TestKicCustomNetwork/use_default_bridge_network 33.1
208 TestKicExistingNetwork 33.11
209 TestKicCustomSubnet 32.91
210 TestKicStaticIP 31.51
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 70.03
215 TestMountStart/serial/StartWithMountFirst 6.08
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 8.7
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.62
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.37
223 TestMountStart/serial/VerifyMountPostStop 0.25
226 TestMultiNode/serial/FreshStart2Nodes 69.33
227 TestMultiNode/serial/DeployApp2Nodes 18.21
228 TestMultiNode/serial/PingHostFrom2Pods 1
229 TestMultiNode/serial/AddNode 17.36
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.32
232 TestMultiNode/serial/CopyFile 9.93
233 TestMultiNode/serial/StopNode 2.23
234 TestMultiNode/serial/StartAfterStop 9.71
235 TestMultiNode/serial/RestartKeepsNodes 81.51
236 TestMultiNode/serial/DeleteNode 5.27
237 TestMultiNode/serial/StopMultiNode 23.94
238 TestMultiNode/serial/RestartMultiNode 47.47
239 TestMultiNode/serial/ValidateNameConflict 33.84
244 TestPreload 120.03
246 TestScheduledStopUnix 108.86
249 TestInsufficientStorage 10.23
250 TestRunningBinaryUpgrade 83.87
252 TestKubernetesUpgrade 343.2
253 TestMissingContainerUpgrade 164.56
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
256 TestNoKubernetes/serial/StartWithK8s 36.94
257 TestNoKubernetes/serial/StartWithStopK8s 19.01
258 TestNoKubernetes/serial/Start 9.83
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
260 TestNoKubernetes/serial/ProfileList 1.3
261 TestNoKubernetes/serial/Stop 1.27
262 TestNoKubernetes/serial/StartNoArgs 6.78
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
264 TestStoppedBinaryUpgrade/Setup 0.86
265 TestStoppedBinaryUpgrade/Upgrade 108.45
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
275 TestPause/serial/Start 52.8
276 TestPause/serial/SecondStartNoReconfiguration 7.97
277 TestPause/serial/Pause 0.93
278 TestPause/serial/VerifyStatus 0.39
279 TestPause/serial/Unpause 0.87
280 TestPause/serial/PauseAgain 1
281 TestPause/serial/DeletePaused 3.17
282 TestPause/serial/VerifyDeletedResources 0.27
290 TestNetworkPlugins/group/false 4.54
295 TestStartStop/group/old-k8s-version/serial/FirstStart 158.34
296 TestStartStop/group/old-k8s-version/serial/DeployApp 7.55
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.67
298 TestStartStop/group/old-k8s-version/serial/Stop 12.69
300 TestStartStop/group/no-preload/serial/FirstStart 79.4
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
302 TestStartStop/group/old-k8s-version/serial/SecondStart 376.69
303 TestStartStop/group/no-preload/serial/DeployApp 8.4
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
305 TestStartStop/group/no-preload/serial/Stop 12.1
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/no-preload/serial/SecondStart 266.58
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
311 TestStartStop/group/no-preload/serial/Pause 3.12
313 TestStartStop/group/embed-certs/serial/FirstStart 60.87
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.15
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
317 TestStartStop/group/old-k8s-version/serial/Pause 3.87
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 91.94
320 TestStartStop/group/embed-certs/serial/DeployApp 8.32
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
322 TestStartStop/group/embed-certs/serial/Stop 12.01
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
324 TestStartStop/group/embed-certs/serial/SecondStart 266.61
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.36
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.06
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 269.48
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
333 TestStartStop/group/embed-certs/serial/Pause 3.03
335 TestStartStop/group/newest-cni/serial/FirstStart 39.05
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.27
339 TestStartStop/group/newest-cni/serial/Stop 1.31
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
341 TestStartStop/group/newest-cni/serial/SecondStart 23.2
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.45
345 TestNetworkPlugins/group/auto/Start 88.46
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
349 TestStartStop/group/newest-cni/serial/Pause 3.57
350 TestNetworkPlugins/group/kindnet/Start 58.68
351 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
352 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
353 TestNetworkPlugins/group/kindnet/NetCatPod 9.29
354 TestNetworkPlugins/group/auto/KubeletFlags 0.27
355 TestNetworkPlugins/group/auto/NetCatPod 9.28
356 TestNetworkPlugins/group/kindnet/DNS 0.21
357 TestNetworkPlugins/group/kindnet/Localhost 0.17
358 TestNetworkPlugins/group/kindnet/HairPin 0.18
359 TestNetworkPlugins/group/auto/DNS 0.24
360 TestNetworkPlugins/group/auto/Localhost 0.23
361 TestNetworkPlugins/group/auto/HairPin 0.29
362 TestNetworkPlugins/group/calico/Start 73.73
363 TestNetworkPlugins/group/custom-flannel/Start 55.89
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.27
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/custom-flannel/DNS 0.22
368 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
369 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
370 TestNetworkPlugins/group/calico/KubeletFlags 0.3
371 TestNetworkPlugins/group/calico/NetCatPod 11.24
372 TestNetworkPlugins/group/calico/DNS 0.26
373 TestNetworkPlugins/group/calico/Localhost 0.23
374 TestNetworkPlugins/group/calico/HairPin 0.21
375 TestNetworkPlugins/group/enable-default-cni/Start 80.34
376 TestNetworkPlugins/group/flannel/Start 55.47
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
380 TestNetworkPlugins/group/flannel/NetCatPod 10.32
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.32
382 TestNetworkPlugins/group/flannel/DNS 0.17
383 TestNetworkPlugins/group/flannel/Localhost 0.19
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
385 TestNetworkPlugins/group/flannel/HairPin 0.26
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
388 TestNetworkPlugins/group/bridge/Start 42.49
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
390 TestNetworkPlugins/group/bridge/NetCatPod 10.27
391 TestNetworkPlugins/group/bridge/DNS 25.99
392 TestNetworkPlugins/group/bridge/Localhost 0.14
393 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (9.92s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-971583 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-971583 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.914979452s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.92s)
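Note: the json-events subtests run minikube start with -o=json, which switches output to one CloudEvents-style JSON object per line, and the test consumes that stream rather than the styled text. A hedged manual equivalent (the profile name "download-demo" and the jq filter are illustrative; .data.message is the per-event message field as minikube currently emits it):

	out/minikube-linux-arm64 start -o=json --download-only -p download-demo \
	  --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker \
	  | jq -r '.data.message // empty'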

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-971583
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-971583: exit status 85 (70.076874ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-971583 | jenkins | v1.33.1 | 14 Aug 24 00:26 UTC |          |
	|         | -p download-only-971583        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 00:26:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 00:26:57.640218  593014 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:26:57.640361  593014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:26:57.640373  593014 out.go:304] Setting ErrFile to fd 2...
	I0814 00:26:57.640378  593014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:26:57.640654  593014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
	W0814 00:26:57.640799  593014 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19429-587614/.minikube/config/config.json: open /home/jenkins/minikube-integration/19429-587614/.minikube/config/config.json: no such file or directory
	I0814 00:26:57.641236  593014 out.go:298] Setting JSON to true
	I0814 00:26:57.642112  593014 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14962,"bootTime":1723580256,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0814 00:26:57.642184  593014 start.go:139] virtualization:  
	I0814 00:26:57.645841  593014 out.go:97] [download-only-971583] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0814 00:26:57.646039  593014 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19429-587614/.minikube/cache/preloaded-tarball: no such file or directory
	I0814 00:26:57.646081  593014 notify.go:220] Checking for updates...
	I0814 00:26:57.648546  593014 out.go:169] MINIKUBE_LOCATION=19429
	I0814 00:26:57.651258  593014 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:26:57.653701  593014 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19429-587614/kubeconfig
	I0814 00:26:57.656019  593014 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-587614/.minikube
	I0814 00:26:57.658439  593014 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0814 00:26:57.663440  593014 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0814 00:26:57.663729  593014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:26:57.688264  593014 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0814 00:26:57.688360  593014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 00:26:57.745268  593014 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-14 00:26:57.736175653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0814 00:26:57.745380  593014 docker.go:307] overlay module found
	I0814 00:26:57.747927  593014 out.go:97] Using the docker driver based on user configuration
	I0814 00:26:57.747966  593014 start.go:297] selected driver: docker
	I0814 00:26:57.747973  593014 start.go:901] validating driver "docker" against <nil>
	I0814 00:26:57.748088  593014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 00:26:57.800586  593014 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-14 00:26:57.791211031 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0814 00:26:57.800757  593014 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 00:26:57.801043  593014 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0814 00:26:57.801237  593014 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 00:26:57.804012  593014 out.go:169] Using Docker driver with root privileges
	I0814 00:26:57.806521  593014 cni.go:84] Creating CNI manager for ""
	I0814 00:26:57.806540  593014 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0814 00:26:57.806551  593014 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 00:26:57.806633  593014 start.go:340] cluster config:
	{Name:download-only-971583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-971583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:26:57.809268  593014 out.go:97] Starting "download-only-971583" primary control-plane node in "download-only-971583" cluster
	I0814 00:26:57.809288  593014 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0814 00:26:57.811660  593014 out.go:97] Pulling base image v0.0.44-1723567951-19429 ...
	I0814 00:26:57.811687  593014 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0814 00:26:57.811856  593014 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local docker daemon
	I0814 00:26:57.826195  593014 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 to local cache
	I0814 00:26:57.826363  593014 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local cache directory
	I0814 00:26:57.826470  593014 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 to local cache
	I0814 00:26:57.944868  593014 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0814 00:26:57.944894  593014 cache.go:56] Caching tarball of preloaded images
	I0814 00:26:57.945061  593014 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0814 00:26:57.948044  593014 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0814 00:26:57.948069  593014 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0814 00:26:58.053867  593014 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19429-587614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-971583 host does not exist
	  To start a cluster, run: "minikube start -p download-only-971583"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
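Note: two details in this captured log. First, "minikube logs" exiting with status 85 is the expected path here: the profile was created with --download-only, so no control-plane host exists to collect logs from (the stdout above says as much), and the test tolerates the non-zero exit. Second, the preload tarball URL carries its own integrity check: the ?checksum=md5:... query string is verified by minikube's downloader after the fetch. A hedged manual equivalent of that verification (the output file name is illustrative):

	curl -fLo preload.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4"
	echo "7e3d48ccb9f143791669d02e14ce1643  preload.tar.lz4" | md5sum -c -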

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-971583
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (6.82s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-820317 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-820317 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.82030689s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (6.82s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-820317
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-820317: exit status 85 (68.31315ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-971583 | jenkins | v1.33.1 | 14 Aug 24 00:26 UTC |                     |
	|         | -p download-only-971583        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC | 14 Aug 24 00:27 UTC |
	| delete  | -p download-only-971583        | download-only-971583 | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC | 14 Aug 24 00:27 UTC |
	| start   | -o=json --download-only        | download-only-820317 | jenkins | v1.33.1 | 14 Aug 24 00:27 UTC |                     |
	|         | -p download-only-820317        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 00:27:07
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 00:27:07.953675  593219 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:27:07.953901  593219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:27:07.953928  593219 out.go:304] Setting ErrFile to fd 2...
	I0814 00:27:07.953949  593219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:27:07.954231  593219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
	I0814 00:27:07.954746  593219 out.go:298] Setting JSON to true
	I0814 00:27:07.955675  593219 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14972,"bootTime":1723580256,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0814 00:27:07.955777  593219 start.go:139] virtualization:  
	I0814 00:27:07.958269  593219 out.go:97] [download-only-820317] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0814 00:27:07.958465  593219 notify.go:220] Checking for updates...
	I0814 00:27:07.960609  593219 out.go:169] MINIKUBE_LOCATION=19429
	I0814 00:27:07.962699  593219 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:27:07.964755  593219 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19429-587614/kubeconfig
	I0814 00:27:07.966419  593219 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-587614/.minikube
	I0814 00:27:07.968290  593219 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0814 00:27:07.972306  593219 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0814 00:27:07.972615  593219 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:27:07.997724  593219 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0814 00:27:07.997832  593219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 00:27:08.059682  593219 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-14 00:27:08.049313502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0814 00:27:08.059801  593219 docker.go:307] overlay module found
	I0814 00:27:08.061900  593219 out.go:97] Using the docker driver based on user configuration
	I0814 00:27:08.061927  593219 start.go:297] selected driver: docker
	I0814 00:27:08.061934  593219 start.go:901] validating driver "docker" against <nil>
	I0814 00:27:08.062050  593219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 00:27:08.117272  593219 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-14 00:27:08.108098287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0814 00:27:08.117444  593219 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 00:27:08.117755  593219 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0814 00:27:08.117923  593219 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 00:27:08.119802  593219 out.go:169] Using Docker driver with root privileges
	I0814 00:27:08.121856  593219 cni.go:84] Creating CNI manager for ""
	I0814 00:27:08.121875  593219 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0814 00:27:08.121889  593219 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 00:27:08.121984  593219 start.go:340] cluster config:
	{Name:download-only-820317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-820317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:27:08.123613  593219 out.go:97] Starting "download-only-820317" primary control-plane node in "download-only-820317" cluster
	I0814 00:27:08.123639  593219 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0814 00:27:08.125186  593219 out.go:97] Pulling base image v0.0.44-1723567951-19429 ...
	I0814 00:27:08.125212  593219 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0814 00:27:08.125312  593219 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local docker daemon
	I0814 00:27:08.140445  593219 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 to local cache
	I0814 00:27:08.140584  593219 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local cache directory
	I0814 00:27:08.140605  593219 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local cache directory, skipping pull
	I0814 00:27:08.140611  593219 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 exists in cache, skipping pull
	I0814 00:27:08.140620  593219 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 as a tarball
	I0814 00:27:08.247409  593219 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0814 00:27:08.247437  593219 cache.go:56] Caching tarball of preloaded images
	I0814 00:27:08.247621  593219 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0814 00:27:08.249623  593219 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0814 00:27:08.249655  593219 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0814 00:27:08.334034  593219 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/19429-587614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-820317 host does not exist
	  To start a cluster, run: "minikube start -p download-only-820317"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)
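
The preload fetch above can be reproduced by hand. A minimal sketch, assuming the same tarball URL and md5 checksum shown in the log (the destination under MINIKUBE_HOME varies per host):

    # Download the arm64 containerd preload and verify it against the checksum from the log.
    URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4"
    curl -fL -O "$URL"
    echo "ea65ad5fd42227e06b9323ff45647208  preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -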

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-820317
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-346973 --alsologtostderr --binary-mirror http://127.0.0.1:37587 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-346973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-346973
--- PASS: TestBinaryMirror (0.56s)
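
The --binary-mirror flag redirects the kubeadm/kubelet/kubectl downloads to an alternate host; the test serves one on 127.0.0.1:37587 before starting. A rough sketch under the assumption that a mirror following the Kubernetes release layout is already listening there (the binary-mirror-demo profile name is illustrative):

    # Download-only start that fetches the Kubernetes binaries from the local mirror.
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:37587 \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 delete -p binary-mirror-demo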

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-785001
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-785001: exit status 85 (75.289798ms)

-- stdout --
	* Profile "addons-785001" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-785001"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
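
The Non-zero exit above is the expected outcome: minikube signals a missing profile with exit status 85, which scripts can distinguish from an ordinary failure. A small sketch reusing the same profile name:

    # Exit status 85 means "profile not found" rather than a real enable failure.
    out/minikube-linux-arm64 addons enable dashboard -p addons-785001
    rc=$?
    if [ "$rc" -eq 85 ]; then
        echo 'no such profile; create it first: minikube start -p addons-785001'
    fi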

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-785001
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-785001: exit status 85 (70.150171ms)

-- stdout --
	* Profile "addons-785001" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-785001"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (215.55s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-785001 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-785001 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m35.552166278s)
--- PASS: TestAddons/Setup (215.55s)
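
For readability, the same start invocation with the flags split across lines:

    out/minikube-linux-arm64 start -p addons-785001 --wait=true --memory=4000 --alsologtostderr \
      --addons=registry --addons=metrics-server --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
      --addons=inspektor-gadget --addons=storage-provisioner-rancher \
      --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
      --addons=ingress --addons=ingress-dns \
      --driver=docker --container-runtime=containerd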

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-785001 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-785001 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)
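
The check relies on the gcp-auth addon replicating its credentials Secret into namespaces created after it is enabled; the two steps replay as-is:

    # A namespace created now should immediately receive a copy of the gcp-auth Secret.
    kubectl --context addons-785001 create ns new-namespace
    kubectl --context addons-785001 get secret gcp-auth -n new-namespace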

                                                
                                    
TestAddons/parallel/Registry (15.53s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.623902ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-jx2zj" [3e30be30-2743-4a1e-9993-0f606c2b2940] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005665507s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-l6r74" [49927f73-0585-451a-a297-1a69cbe7f1ce] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003752212s
addons_test.go:342: (dbg) Run:  kubectl --context addons-785001 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-785001 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-785001 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.471310023s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-785001 ip
2024/08/14 00:34:44 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-785001 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.53s)
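
Both registry probes can be rerun manually with the same busybox image and in-cluster service name; the host-side request mirrors the GET against port 5000 in the log:

    # From inside the cluster, via the registry Service's DNS name.
    kubectl --context addons-785001 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # From the host, via the node IP on port 5000 (registry-proxy).
    curl -sI "http://$(out/minikube-linux-arm64 -p addons-785001 ip):5000"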

                                                
                                    
TestAddons/parallel/Ingress (19.92s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-785001 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-785001 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-785001 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3daf8f78-301d-493a-b60b-6e4e8e214bb6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3daf8f78-301d-493a-b60b-6e4e8e214bb6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004073156s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-785001 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-785001 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-785001 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-785001 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-785001 addons disable ingress-dns --alsologtostderr -v=1: (1.45178098s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-785001 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-785001 addons disable ingress --alsologtostderr -v=1: (7.773896137s)
--- PASS: TestAddons/parallel/Ingress (19.92s)
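
The two verifications reduce to an HTTP request with a Host header and a DNS query against the node IP; both can be rerun directly:

    # Ingress: hit the nginx controller on the node with the test host name.
    out/minikube-linux-arm64 -p addons-785001 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns: the node IP answers DNS queries for names defined by the example Ingress.
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-785001 ip)"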

                                                
                                    
TestAddons/parallel/InspektorGadget (12.32s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kjrnf" [ed1b88d3-4514-4ea2-a47a-f78a56eb550b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006389268s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-785001
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-785001: (6.316109368s)
--- PASS: TestAddons/parallel/InspektorGadget (12.32s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.9s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.224888ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-25gt9" [d49d2e62-4ffd-4d71-b136-92dacc67f2e8] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003918068s
addons_test.go:417: (dbg) Run:  kubectl --context addons-785001 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-785001 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.90s)
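
Once the k8s-app=metrics-server pod is healthy, the resource-metrics API should answer kubectl top; the same smoke test, plus its node-level variant:

    # Fails until metrics-server has completed at least one scrape cycle.
    kubectl --context addons-785001 top pods -n kube-system
    kubectl --context addons-785001 top nodes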

                                                
                                    
TestAddons/parallel/CSI (55.3s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.283625ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-785001 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-785001 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [dccb5dfb-9058-49a1-a3fb-f344749ee3cc] Pending
helpers_test.go:344: "task-pv-pod" [dccb5dfb-9058-49a1-a3fb-f344749ee3cc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [dccb5dfb-9058-49a1-a3fb-f344749ee3cc] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003545838s
addons_test.go:590: (dbg) Run:  kubectl --context addons-785001 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-785001 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-785001 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-785001 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-785001 delete pod task-pv-pod: (1.196970688s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-785001 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-785001 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-785001 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2743d4e4-3c71-4b97-a818-f495327b1ac4] Pending
helpers_test.go:344: "task-pv-pod-restore" [2743d4e4-3c71-4b97-a818-f495327b1ac4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2743d4e4-3c71-4b97-a818-f495327b1ac4] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004165664s
addons_test.go:632: (dbg) Run:  kubectl --context addons-785001 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-785001 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-785001 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-785001 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-785001 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.770351515s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-785001 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.30s)
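
Condensed, the snapshot/restore round trip exercised above is the following sequence (manifests from the repo's testdata/csi-hostpath-driver directory; the waits between steps are omitted):

    kubectl --context addons-785001 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-785001 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-785001 create -f testdata/csi-hostpath-driver/snapshot.yaml
    # Proceed once the snapshot reports readyToUse=true.
    kubectl --context addons-785001 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
    kubectl --context addons-785001 delete pod task-pv-pod
    kubectl --context addons-785001 delete pvc hpvc
    # Restore the snapshot into a fresh PVC and mount it from a new pod.
    kubectl --context addons-785001 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-785001 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml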

                                                
                                    
TestAddons/parallel/Headlamp (15.82s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-785001 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-785001 --alsologtostderr -v=1: (1.020708683s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-tzcn9" [889bc5b1-d205-4438-ad63-7902e2b93993] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-tzcn9" [889bc5b1-d205-4438-ad63-7902e2b93993] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-tzcn9" [889bc5b1-d205-4438-ad63-7902e2b93993] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003493618s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-785001 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-785001 addons disable headlamp --alsologtostderr -v=1: (5.793239946s)
--- PASS: TestAddons/parallel/Headlamp (15.82s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-x9t4d" [408f4e2a-7405-4a3a-8195-833eba14f013] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00379348s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-785001
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

                                                
                                    
TestAddons/parallel/LocalPath (53.05s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-785001 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-785001 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785001 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e906c236-c296-4ad7-8bfd-a7bdd86e5e21] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e906c236-c296-4ad7-8bfd-a7bdd86e5e21] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e906c236-c296-4ad7-8bfd-a7bdd86e5e21] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.006929664s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-785001 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-785001 ssh "cat /opt/local-path-provisioner/pvc-715c88bf-6b8b-40ae-8d60-c28995e7cc40_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-785001 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-785001 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-785001 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-785001 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.744394678s)
--- PASS: TestAddons/parallel/LocalPath (53.05s)
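
The flow binds a PVC through the local-path (Rancher) provisioner, lets a pod write a file, then reads it back from the node's hostPath directory; the pvc-<uid> path component differs on every run:

    kubectl --context addons-785001 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-785001 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # The volume is backed by /opt/local-path-provisioner/pvc-<uid>_default_test-pvc on the node.
    out/minikube-linux-arm64 -p addons-785001 ssh "ls /opt/local-path-provisioner/"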

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.64s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-g9jrt" [cfa65ea4-62b7-4d0a-9676-26c484d6665c] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004287614s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-785001
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.64s)

                                                
                                    
TestAddons/parallel/Yakd (11.85s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-kn98c" [6d6896a3-45bd-48da-a8be-f46d694be470] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003484363s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-785001 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-785001 addons disable yakd --alsologtostderr -v=1: (5.843161739s)
--- PASS: TestAddons/parallel/Yakd (11.85s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.32s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-785001
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-785001: (12.067012417s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-785001
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-785001
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-785001
--- PASS: TestAddons/StoppedEnableDisable (12.32s)
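
The point is that addon toggles still succeed against a stopped cluster, where they update the stored profile configuration rather than a live API server; replayed:

    out/minikube-linux-arm64 stop -p addons-785001
    # Both commands must exit 0 even though no apiserver is reachable.
    out/minikube-linux-arm64 addons enable dashboard -p addons-785001
    out/minikube-linux-arm64 addons disable dashboard -p addons-785001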

                                                
                                    
TestCertOptions (38.78s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-826371 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-826371 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.069363764s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-826371 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-826371 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-826371 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-826371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-826371
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-826371: (2.020193821s)
--- PASS: TestCertOptions (38.78s)
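
The extra --apiserver-ips and --apiserver-names values should appear among the API server certificate's subject alternative names, which the openssl invocation dumps; a grep (added here purely for illustration) narrows the output to the SAN block:

    out/minikube-linux-arm64 -p cert-options-826371 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'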

                                                
                                    
TestCertExpiration (235.77s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-042043 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-042043 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (44.094793563s)
E0814 01:13:55.245320  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-042043 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-042043 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.28481732s)
helpers_test.go:175: Cleaning up "cert-expiration-042043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-042043
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-042043: (2.39037942s)
--- PASS: TestCertExpiration (235.77s)

                                                
                                    
TestForceSystemdFlag (38.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-585145 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-585145 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.248179623s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-585145 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-585145" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-585145
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-585145: (2.141426565s)
--- PASS: TestForceSystemdFlag (38.05s)
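
--force-systemd switches the node to the systemd cgroup driver; with containerd this should surface as SystemdCgroup = true in the runc runtime options of the config file the test cats:

    out/minikube-linux-arm64 -p force-systemd-flag-585145 ssh \
      "cat /etc/containerd/config.toml" | grep SystemdCgroup
    # Expected output: SystemdCgroup = true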

                                                
                                    
TestForceSystemdEnv (41.7s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-169762 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-169762 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.071329766s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-169762 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-169762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-169762
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-169762: (2.162342145s)
--- PASS: TestForceSystemdEnv (41.70s)

                                                
                                    
TestDockerEnvContainerd (47.01s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-932642 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-932642 --driver=docker  --container-runtime=containerd: (31.53310183s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-932642"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-fqqYfMkchASc/agent.612055" SSH_AGENT_PID="612056" DOCKER_HOST=ssh://docker@127.0.0.1:33513 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-fqqYfMkchASc/agent.612055" SSH_AGENT_PID="612056" DOCKER_HOST=ssh://docker@127.0.0.1:33513 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-fqqYfMkchASc/agent.612055" SSH_AGENT_PID="612056" DOCKER_HOST=ssh://docker@127.0.0.1:33513 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.182274054s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-fqqYfMkchASc/agent.612055" SSH_AGENT_PID="612056" DOCKER_HOST=ssh://docker@127.0.0.1:33513 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-932642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-932642
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-932642: (1.971812935s)
--- PASS: TestDockerEnvContainerd (47.01s)
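
docker-env --ssh-host --ssh-add emits DOCKER_HOST=ssh://... plus SSH agent variables (visible in the commands above) instead of TLS settings, so a docker client on the host drives the engine inside the minikube node over SSH. Typical usage evaluates the output (the image tag below is illustrative):

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-932642)"
    docker version    # now answered by the daemon inside the node, over ssh://
    docker build -t local/dockerenv-demo:latest testdata/docker-env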

                                                
                                    
TestErrorSpam/setup (30.31s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-779292 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-779292 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-779292 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-779292 --driver=docker  --container-runtime=containerd: (30.306803454s)
--- PASS: TestErrorSpam/setup (30.31s)

                                                
                                    
TestErrorSpam/start (0.72s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)

                                                
                                    
TestErrorSpam/status (1.06s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 status
--- PASS: TestErrorSpam/status (1.06s)

                                                
                                    
TestErrorSpam/pause (1.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 pause
--- PASS: TestErrorSpam/pause (1.79s)

                                                
                                    
TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
TestErrorSpam/stop (1.71s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 stop: (1.295310744s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-779292 --log_dir /tmp/nospam-779292 stop
--- PASS: TestErrorSpam/stop (1.71s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19429-587614/.minikube/files/etc/test/nested/copy/593008/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (50.36s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-519686 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-519686 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (50.357152217s)
--- PASS: TestFunctional/serial/StartWithProxy (50.36s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.56s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-519686 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-519686 --alsologtostderr -v=8: (6.562755987s)
functional_test.go:663: soft start took 6.564186225s for "functional-519686" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.56s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-519686 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-519686 cache add registry.k8s.io/pause:3.1: (1.52569086s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-519686 cache add registry.k8s.io/pause:3.3: (1.422135872s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-519686 cache add registry.k8s.io/pause:latest: (1.200374208s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-519686 /tmp/TestFunctionalserialCacheCmdcacheadd_local461389511/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 cache add minikube-local-cache-test:functional-519686
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 cache delete minikube-local-cache-test:functional-519686
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-519686
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-519686 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.361701ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-519686 cache reload: (1.097091065s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)
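
Note: the reload sequence this test exercises can be reproduced by hand; a minimal sketch, assuming a profile named functional-519686 and a generic minikube binary (crictl's "inspecti" is its image-inspect subcommand):

  # remove the cached image from the node, then confirm it is gone
  minikube -p functional-519686 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-519686 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image absent
  # re-push everything in minikube's local image cache onto the node
  minikube -p functional-519686 cache reload
  minikube -p functional-519686 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: image restored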

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 kubectl -- --context functional-519686 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-519686 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (45.8s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-519686 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-519686 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.794682631s)
functional_test.go:761: restart took 45.794882286s for "functional-519686" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (45.80s)
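
Note: --extra-config takes the form <component>.<flag>=<value> and forwards that flag to the named control-plane component on (re)start. A sketch of the invocation above, with a generic minikube binary:

  minikube start -p functional-519686 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all   # block until all verified components report healthy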

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-519686 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.63s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-519686 logs: (1.628803945s)
--- PASS: TestFunctional/serial/LogsCmd (1.63s)

TestFunctional/serial/LogsFileCmd (1.74s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 logs --file /tmp/TestFunctionalserialLogsFileCmd222975095/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-519686 logs --file /tmp/TestFunctionalserialLogsFileCmd222975095/001/logs.txt: (1.736625594s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.74s)

TestFunctional/serial/InvalidService (4.54s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-519686 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-519686
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-519686: exit status 115 (705.395626ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32460 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-519686 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.54s)
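
Note: the URL table still prints before the failure because it is derived from the Service object itself, while the SVC_UNREACHABLE check (exit code 115) looks for a running backing pod. A rough reproduction, assuming any manifest whose service selector matches no pod (the file name here is illustrative):

  kubectl --context functional-519686 apply -f invalidsvc.yaml   # service with no matching pods
  minikube -p functional-519686 service invalid-svc              # prints the table, then fails
  echo $?                                                        # 115 -> SVC_UNREACHABLE
  kubectl --context functional-519686 delete -f invalidsvc.yaml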

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-519686 config get cpus: exit status 14 (81.687135ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-519686 config get cpus: exit status 14 (75.726396ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
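
Note: config get on an unset key exits with code 14 instead of printing an empty value, so scripts can tell "unset" apart from "set to empty". The round trip, sketched with a generic minikube binary:

  minikube -p functional-519686 config set cpus 2     # persists a per-profile value
  minikube -p functional-519686 config get cpus       # prints 2, exit 0
  minikube -p functional-519686 config unset cpus
  minikube -p functional-519686 config get cpus       # exit 14: key not found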

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.31s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-519686 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-519686 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 626544: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.31s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-519686 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-519686 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (200.335275ms)

-- stdout --
	* [functional-519686] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-587614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-587614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0814 00:40:34.479172  626244 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:40:34.479435  626244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:40:34.479469  626244 out.go:304] Setting ErrFile to fd 2...
	I0814 00:40:34.479489  626244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:40:34.479780  626244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
	I0814 00:40:34.480364  626244 out.go:298] Setting JSON to false
	I0814 00:40:34.481485  626244 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15779,"bootTime":1723580256,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0814 00:40:34.481588  626244 start.go:139] virtualization:  
	I0814 00:40:34.484450  626244 out.go:177] * [functional-519686] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0814 00:40:34.487302  626244 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 00:40:34.487399  626244 notify.go:220] Checking for updates...
	I0814 00:40:34.491892  626244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:40:34.493973  626244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-587614/kubeconfig
	I0814 00:40:34.496501  626244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-587614/.minikube
	I0814 00:40:34.498756  626244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0814 00:40:34.501572  626244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 00:40:34.504752  626244 config.go:182] Loaded profile config "functional-519686": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0814 00:40:34.505283  626244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:40:34.544173  626244 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0814 00:40:34.544338  626244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 00:40:34.609700  626244 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-14 00:40:34.593841304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0814 00:40:34.609811  626244 docker.go:307] overlay module found
	I0814 00:40:34.612400  626244 out.go:177] * Using the docker driver based on existing profile
	I0814 00:40:34.614254  626244 start.go:297] selected driver: docker
	I0814 00:40:34.614274  626244 start.go:901] validating driver "docker" against &{Name:functional-519686 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-519686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:40:34.614411  626244 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 00:40:34.617138  626244 out.go:177] 
	W0814 00:40:34.619109  626244 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0814 00:40:34.620966  626244 out.go:177] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-519686 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.44s)
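
Note: --dry-run performs the full validation pass (driver selection, resource checks) against the existing profile without creating or changing anything; here the 250MB request trips the 1800MB usable minimum and exits 23. Sketch, with a generic minikube binary:

  minikube start -p functional-519686 --dry-run --memory 250MB \
    --driver=docker --container-runtime=containerd
  echo $?   # 23: RSRC_INSUFFICIENT_REQ_MEMORY (250MiB < 1800MB minimum)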

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-519686 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-519686 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (202.133208ms)

-- stdout --
	* [functional-519686] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-587614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-587614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0814 00:40:34.276030  626200 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:40:34.276228  626200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:40:34.276235  626200 out.go:304] Setting ErrFile to fd 2...
	I0814 00:40:34.276240  626200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:40:34.276688  626200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
	I0814 00:40:34.277062  626200 out.go:298] Setting JSON to false
	I0814 00:40:34.278116  626200 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15779,"bootTime":1723580256,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0814 00:40:34.278182  626200 start.go:139] virtualization:  
	I0814 00:40:34.282003  626200 out.go:177] * [functional-519686] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0814 00:40:34.284798  626200 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 00:40:34.284971  626200 notify.go:220] Checking for updates...
	I0814 00:40:34.289935  626200 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:40:34.292274  626200 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-587614/kubeconfig
	I0814 00:40:34.295005  626200 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-587614/.minikube
	I0814 00:40:34.297246  626200 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0814 00:40:34.299289  626200 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 00:40:34.302214  626200 config.go:182] Loaded profile config "functional-519686": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0814 00:40:34.302764  626200 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:40:34.338914  626200 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0814 00:40:34.339484  626200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 00:40:34.400580  626200 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-14 00:40:34.390587371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0814 00:40:34.400692  626200 docker.go:307] overlay module found
	I0814 00:40:34.407608  626200 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0814 00:40:34.411215  626200 start.go:297] selected driver: docker
	I0814 00:40:34.411235  626200 start.go:901] validating driver "docker" against &{Name:functional-519686 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-519686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:40:34.411377  626200 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 00:40:34.414498  626200 out.go:177] 
	W0814 00:40:34.416760  626200 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0814 00:40:34.418986  626200 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.27s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)
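
Note: status -f takes a Go template over the status fields (.Host, .Kubelet, .APIServer, .Kubeconfig), and -o json emits the same data for machine consumption; the "kublet" label above is a literal string in the test's template, not a field name. For example, with a generic minikube binary:

  minikube -p functional-519686 status -f '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'
  minikube -p functional-519686 status -o json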

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-519686 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-519686 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-qt9v5" [8823dbe9-d60a-46aa-8e3c-d4da309fbacd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-qt9v5" [8823dbe9-d60a-46aa-8e3c-d4da309fbacd] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003901387s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30778
functional_test.go:1675: http://192.168.49.2:30778: success! body:

Hostname: hello-node-connect-65d86f57f4-qt9v5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30778
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.63s)
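
Note: the end-to-end flow above, condensed (names and image as in the log; the response body is the echoserver's standard request dump):

  kubectl --context functional-519686 create deployment hello-node-connect \
    --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-519686 expose deployment hello-node-connect \
    --type=NodePort --port=8080
  URL=$(minikube -p functional-519686 service hello-node-connect --url)   # e.g. http://192.168.49.2:30778
  curl -s "$URL"   # echoserver reports hostname, headers, and request path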

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (25.61s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [37f79213-1454-4476-9726-f6e6589e95a6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.009170413s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-519686 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-519686 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-519686 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-519686 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8979dba5-796f-483c-b2f4-7d48227a3292] Pending
helpers_test.go:344: "sp-pod" [8979dba5-796f-483c-b2f4-7d48227a3292] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8979dba5-796f-483c-b2f4-7d48227a3292] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004695922s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-519686 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-519686 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-519686 delete -f testdata/storage-provisioner/pod.yaml: (1.598442478s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-519686 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6b8ca90d-285f-45bf-8bfa-1a24cca2e5e9] Pending
helpers_test.go:344: "sp-pod" [6b8ca90d-285f-45bf-8bfa-1a24cca2e5e9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004260056s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-519686 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.61s)
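
Note: the point of the second sp-pod is that data written through the claim outlives the pod: /tmp/mount/foo, created before the first pod is deleted, is still listed by its replacement. Condensed, with the test's manifest paths (--context functional-519686 omitted for brevity):

  kubectl apply -f testdata/storage-provisioner/pvc.yaml
  kubectl apply -f testdata/storage-provisioner/pod.yaml
  kubectl exec sp-pod -- touch /tmp/mount/foo              # write through the PVC mount
  kubectl delete -f testdata/storage-provisioner/pod.yaml
  kubectl apply -f testdata/storage-provisioner/pod.yaml   # fresh pod, same claim
  kubectl exec sp-pod -- ls /tmp/mount                     # foo is still there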

                                                
                                    
TestFunctional/parallel/SSHCmd (0.82s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

TestFunctional/parallel/CpCmd (1.96s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh -n functional-519686 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 cp functional-519686:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1298109053/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh -n functional-519686 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh -n functional-519686 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.96s)
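
Note: the three cp invocations cover host-to-node, node-to-host, and a destination whose parent directories do not yet exist (they are created, as the final cat above confirms). Sketch, with a generic minikube binary:

  minikube -p functional-519686 cp testdata/cp-test.txt /home/docker/cp-test.txt             # host -> node
  minikube -p functional-519686 cp functional-519686:/home/docker/cp-test.txt ./cp-test.txt  # node -> host
  minikube -p functional-519686 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt      # parents created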

                                                
                                    
TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/593008/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "sudo cat /etc/test/nested/copy/593008/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.17s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/593008.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "sudo cat /etc/ssl/certs/593008.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/593008.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "sudo cat /usr/share/ca-certificates/593008.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5930082.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "sudo cat /etc/ssl/certs/5930082.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5930082.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "sudo cat /usr/share/ca-certificates/5930082.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.17s)
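
Note: user-supplied certificates are synced into the guest at both /etc/ssl/certs/<name>.pem and /usr/share/ca-certificates/<name>.pem, plus a hash-named entry (the 51391683.0 / 3ec20f2e.0 names above appear to be OpenSSL subject-hash aliases); all three can be read back over ssh:

  minikube -p functional-519686 ssh "sudo cat /etc/ssl/certs/593008.pem"
  minikube -p functional-519686 ssh "sudo cat /usr/share/ca-certificates/593008.pem"
  minikube -p functional-519686 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named alias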

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-519686 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
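
Note: the go-template above ranges over the label map of the first node and prints the keys; a jsonpath form (illustrative equivalent) prints the whole map:

  kubectl --context functional-519686 get nodes \
    -o go-template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
  kubectl --context functional-519686 get nodes -o jsonpath='{.items[0].metadata.labels}'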

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-519686 ssh "sudo systemctl is-active docker": exit status 1 (252.920804ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-519686 ssh "sudo systemctl is-active crio": exit status 1 (283.075461ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
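
Note: with containerd as the selected runtime, the docker and crio units must be inactive; systemctl is-active prints the unit state and exits 3 for an inactive unit, which ssh propagates as the status seen above. Sketch (the containerd line is an extra illustration, not part of the test):

  minikube -p functional-519686 ssh "sudo systemctl is-active docker"      # "inactive", exit 3
  minikube -p functional-519686 ssh "sudo systemctl is-active crio"        # "inactive", exit 3
  minikube -p functional-519686 ssh "sudo systemctl is-active containerd"  # expected "active", exit 0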

                                                
                                    
TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-519686 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-519686 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-519686 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 623758: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-519686 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-519686 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-519686 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [79d41fee-545a-4109-8abf-2d1d3747c822] Pending
helpers_test.go:344: "nginx-svc" [79d41fee-545a-4109-8abf-2d1d3747c822] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [79d41fee-545a-4109-8abf-2d1d3747c822] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004663976s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.32s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-519686 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.52.181 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
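
Note: minikube tunnel runs as a long-lived process that routes the cluster's service network to the host, so LoadBalancer services get a reachable ingress IP (10.110.52.181 above) instead of staying pending. The flow, condensed:

  minikube -p functional-519686 tunnel &   # keeps running; may prompt for route privileges
  kubectl --context functional-519686 apply -f testdata/testsvc.yaml   # nginx pod + LoadBalancer service
  kubectl --context functional-519686 get svc nginx-svc \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'                 # populated once the tunnel is up
  curl -s http://10.110.52.181/   # reachable from the host while the tunnel runs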

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-519686 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-519686 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-519686 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-ljskr" [871b9a85-401b-4751-b996-74858008c159] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-ljskr" [871b9a85-401b-4751-b996-74858008c159] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.01168919s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ServiceCmd/List (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "392.402267ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "76.381959ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 service list -o json
functional_test.go:1494: Took "600.050249ms" to run "out/minikube-linux-arm64 -p functional-519686 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "407.712037ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "49.964412ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

TestFunctional/parallel/MountCmd/any-port (7.21s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-519686 /tmp/TestFunctionalparallelMountCmdany-port3352801607/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723596031495528105" to /tmp/TestFunctionalparallelMountCmdany-port3352801607/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723596031495528105" to /tmp/TestFunctionalparallelMountCmdany-port3352801607/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723596031495528105" to /tmp/TestFunctionalparallelMountCmdany-port3352801607/001/test-1723596031495528105
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-519686 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (417.763876ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 14 00:40 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 14 00:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 14 00:40 test-1723596031495528105
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh cat /mount-9p/test-1723596031495528105
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-519686 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d7fc9e02-cae2-463c-8b84-75108af019d8] Pending
helpers_test.go:344: "busybox-mount" [d7fc9e02-cae2-463c-8b84-75108af019d8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d7fc9e02-cae2-463c-8b84-75108af019d8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d7fc9e02-cae2-463c-8b84-75108af019d8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003373944s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-519686 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-519686 /tmp/TestFunctionalparallelMountCmdany-port3352801607/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.21s)
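For reference, the 9p mount flow exercised above can be replayed by hand against a running profile. A minimal sketch, assuming the functional-519686 profile is up and /tmp/mnt exists on the host (backgrounding with & stands in for the test's daemon helper; the commands themselves are the ones recorded in this run):

  out/minikube-linux-arm64 mount -p functional-519686 /tmp/mnt:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-519686 ssh "findmnt -T /mount-9p | grep 9p"    # confirm the guest sees a 9p mount
  out/minikube-linux-arm64 -p functional-519686 ssh -- ls -la /mount-9p                 # files written on the host appear here
  out/minikube-linux-arm64 -p functional-519686 ssh "sudo umount -f /mount-9p"          # tear down, as the test does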

TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31947
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31947
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

TestFunctional/parallel/MountCmd/specific-port (2.53s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-519686 /tmp/TestFunctionalparallelMountCmdspecific-port3948405611/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-519686 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (531.368731ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-519686 /tmp/TestFunctionalparallelMountCmdspecific-port3948405611/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-519686 ssh "sudo umount -f /mount-9p": exit status 1 (363.948691ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-519686 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-519686 /tmp/TestFunctionalparallelMountCmdspecific-port3948405611/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.53s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-519686 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665836323/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-519686 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665836323/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-519686 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665836323/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-519686 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-519686 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665836323/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-519686 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665836323/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-519686 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665836323/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)
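The cleanup being verified above hinges on a single command, which the test issues at functional_test_mount_test.go:370. A minimal sketch, assuming the same profile and several live mounts:

  out/minikube-linux-arm64 mount -p functional-519686 --kill=true   # kills every mount process for the profile in one shot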

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.26s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-519686 version -o=json --components: (1.261782251s)
--- PASS: TestFunctional/parallel/Version/components (1.26s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-519686 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-519686
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-519686
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-519686 image ls --format short --alsologtostderr:
I0814 00:40:51.167017  629435 out.go:291] Setting OutFile to fd 1 ...
I0814 00:40:51.167199  629435 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:40:51.167220  629435 out.go:304] Setting ErrFile to fd 2...
I0814 00:40:51.167237  629435 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:40:51.167541  629435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
I0814 00:40:51.168174  629435 config.go:182] Loaded profile config "functional-519686": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0814 00:40:51.168320  629435 config.go:182] Loaded profile config "functional-519686": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0814 00:40:51.168831  629435 cli_runner.go:164] Run: docker container inspect functional-519686 --format={{.State.Status}}
I0814 00:40:51.190281  629435 ssh_runner.go:195] Run: systemctl --version
I0814 00:40:51.190337  629435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-519686
I0814 00:40:51.212624  629435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/functional-519686/id_rsa Username:docker}
I0814 00:40:51.308648  629435 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
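As the stderr above shows, with the containerd runtime "image ls" is resolved by shelling into the node and querying the CRI. A minimal sketch of the same query run directly, using only commands that appear in this log:

  out/minikube-linux-arm64 -p functional-519686 ssh "sudo crictl images --output json"   # raw CRI image list that minikube parses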

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-519686 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-519686  | sha256:4c2ebf | 989B   |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| docker.io/library/nginx                     | alpine             | sha256:d7cd33 | 18.3MB |
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-519686  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/nginx                     | latest             | sha256:235ff2 | 67.6MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-519686 image ls --format table --alsologtostderr:
I0814 00:40:51.746818  629588 out.go:291] Setting OutFile to fd 1 ...
I0814 00:40:51.747009  629588 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:40:51.747022  629588 out.go:304] Setting ErrFile to fd 2...
I0814 00:40:51.747028  629588 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:40:51.747320  629588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
I0814 00:40:51.748205  629588 config.go:182] Loaded profile config "functional-519686": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0814 00:40:51.748368  629588 config.go:182] Loaded profile config "functional-519686": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0814 00:40:51.748882  629588 cli_runner.go:164] Run: docker container inspect functional-519686 --format={{.State.Status}}
I0814 00:40:51.766401  629588 ssh_runner.go:195] Run: systemctl --version
I0814 00:40:51.766455  629588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-519686
I0814 00:40:51.789752  629588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/functional-519686/id_rsa Username:docker}
I0814 00:40:51.887514  629588 ssh_runner.go:195] Run: sudo crictl images --output json
E0814 00:40:52.178444  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:40:52.185235  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:40:52.196771  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:40:52.218220  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:40:52.259652  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:40:52.341105  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:40:52.502611  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:40:52.824388  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:40:53.466066  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-519686 image ls --format json --alsologtostderr:
[{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9"],"repoTags":["docker.io/libra
ry/nginx:alpine"],"size":"18253575"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":
"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:235ff27fe79567e8ccaf4d26a2d24828a65898a83b97fba3c7e39ec4621e1b51","repoDigests":["docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40"],"repoTags":["docker.io/library/nginx:latest"],"size":"67647657"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39
d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-519686"],"size":"2173567"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"7130
0"},{"id":"sha256:4c2ebfe4becab418b55bff14fe89a8a086f45fd8f45f2ace3486baeb2017fe77","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-519686"],"size":"989"},{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-519686 image ls --format json --alsologtostderr:
I0814 00:40:51.453758  629503 out.go:291] Setting OutFile to fd 1 ...
I0814 00:40:51.454036  629503 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:40:51.454066  629503 out.go:304] Setting ErrFile to fd 2...
I0814 00:40:51.454087  629503 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:40:51.454366  629503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
I0814 00:40:51.455235  629503 config.go:182] Loaded profile config "functional-519686": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0814 00:40:51.455428  629503 config.go:182] Loaded profile config "functional-519686": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0814 00:40:51.455960  629503 cli_runner.go:164] Run: docker container inspect functional-519686 --format={{.State.Status}}
I0814 00:40:51.489840  629503 ssh_runner.go:195] Run: systemctl --version
I0814 00:40:51.489899  629503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-519686
I0814 00:40:51.527627  629503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/functional-519686/id_rsa Username:docker}
I0814 00:40:51.619876  629503 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
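The JSON variant prints one flat array of image objects (id, repoDigests, repoTags, size), which makes it the easiest format to post-process. A minimal sketch using jq (jq is an assumption here, not part of the test tooling):

  out/minikube-linux-arm64 -p functional-519686 image ls --format json | jq -r '.[].repoTags[]?'   # print every tag, skipping untagged images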

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-519686 image ls --format yaml --alsologtostderr:
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:4c2ebfe4becab418b55bff14fe89a8a086f45fd8f45f2ace3486baeb2017fe77
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-519686
size: "989"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-519686
size: "2173567"
- id: sha256:d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
repoTags:
- docker.io/library/nginx:alpine
size: "18253575"
- id: sha256:235ff27fe79567e8ccaf4d26a2d24828a65898a83b97fba3c7e39ec4621e1b51
repoDigests:
- docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40
repoTags:
- docker.io/library/nginx:latest
size: "67647657"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-519686 image ls --format yaml --alsologtostderr:
I0814 00:40:51.159211  629436 out.go:291] Setting OutFile to fd 1 ...
I0814 00:40:51.159860  629436 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:40:51.159914  629436 out.go:304] Setting ErrFile to fd 2...
I0814 00:40:51.159934  629436 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:40:51.160202  629436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
I0814 00:40:51.161067  629436 config.go:182] Loaded profile config "functional-519686": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0814 00:40:51.161211  629436 config.go:182] Loaded profile config "functional-519686": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0814 00:40:51.161840  629436 cli_runner.go:164] Run: docker container inspect functional-519686 --format={{.State.Status}}
I0814 00:40:51.180981  629436 ssh_runner.go:195] Run: systemctl --version
I0814 00:40:51.181046  629436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-519686
I0814 00:40:51.202382  629436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/functional-519686/id_rsa Username:docker}
I0814 00:40:51.291304  629436 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-519686 ssh pgrep buildkitd: exit status 1 (327.64486ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image build -t localhost/my-image:functional-519686 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-519686 image build -t localhost/my-image:functional-519686 testdata/build --alsologtostderr: (2.486937343s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-519686 image build -t localhost/my-image:functional-519686 testdata/build --alsologtostderr:
I0814 00:40:51.749425  629589 out.go:291] Setting OutFile to fd 1 ...
I0814 00:40:51.750313  629589 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:40:51.750353  629589 out.go:304] Setting ErrFile to fd 2...
I0814 00:40:51.750375  629589 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:40:51.750642  629589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
I0814 00:40:51.751511  629589 config.go:182] Loaded profile config "functional-519686": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0814 00:40:51.752183  629589 config.go:182] Loaded profile config "functional-519686": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0814 00:40:51.752716  629589 cli_runner.go:164] Run: docker container inspect functional-519686 --format={{.State.Status}}
I0814 00:40:51.773141  629589 ssh_runner.go:195] Run: systemctl --version
I0814 00:40:51.773202  629589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-519686
I0814 00:40:51.795851  629589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/functional-519686/id_rsa Username:docker}
I0814 00:40:51.896157  629589 build_images.go:161] Building image from path: /tmp/build.3274656056.tar
I0814 00:40:51.896229  629589 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0814 00:40:51.908046  629589 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3274656056.tar
I0814 00:40:51.912821  629589 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3274656056.tar: stat -c "%s %y" /var/lib/minikube/build/build.3274656056.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3274656056.tar': No such file or directory
I0814 00:40:51.912863  629589 ssh_runner.go:362] scp /tmp/build.3274656056.tar --> /var/lib/minikube/build/build.3274656056.tar (3072 bytes)
I0814 00:40:51.955982  629589 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3274656056
I0814 00:40:51.966965  629589 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3274656056 -xf /var/lib/minikube/build/build.3274656056.tar
I0814 00:40:51.976306  629589 containerd.go:394] Building image: /var/lib/minikube/build/build.3274656056
I0814 00:40:51.976382  629589 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3274656056 --local dockerfile=/var/lib/minikube/build/build.3274656056 --output type=image,name=localhost/my-image:functional-519686
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:049fa55b6e7dad9498f0571cc8582c3d6aa73f5f5449f08b899dfc8314ef3883
#8 exporting manifest sha256:049fa55b6e7dad9498f0571cc8582c3d6aa73f5f5449f08b899dfc8314ef3883 0.0s done
#8 exporting config sha256:0911bcd15ccdec32af391959d7ffaf2474d9a1595c5073cd94e3a3df84be2669 0.0s done
#8 naming to localhost/my-image:functional-519686 done
#8 DONE 0.1s
I0814 00:40:54.131231  629589 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3274656056 --local dockerfile=/var/lib/minikube/build/build.3274656056 --output type=image,name=localhost/my-image:functional-519686: (2.154818024s)
I0814 00:40:54.131313  629589 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3274656056
I0814 00:40:54.142038  629589 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3274656056.tar
I0814 00:40:54.151510  629589 build_images.go:217] Built localhost/my-image:functional-519686 from /tmp/build.3274656056.tar
I0814 00:40:54.151541  629589 build_images.go:133] succeeded building to: functional-519686
I0814 00:40:54.151547  629589 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.04s)
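The stderr above spells out the build path: the context is tarred up, copied to /var/lib/minikube/build inside the node, and fed to BuildKit. The direct invocation, copied from this run (the build.3274656056 directory is a per-run temp path, so it will differ on replay):

  sudo buildctl build --frontend dockerfile.v0 \
    --local context=/var/lib/minikube/build/build.3274656056 \
    --local dockerfile=/var/lib/minikube/build/build.3274656056 \
    --output type=image,name=localhost/my-image:functional-519686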

TestFunctional/parallel/ImageCommands/Setup (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2024/08/14 00:40:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-519686
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.76s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image load --daemon kicbase/echo-server:functional-519686 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-519686 image load --daemon kicbase/echo-server:functional-519686 --alsologtostderr: (1.280299793s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image load --daemon kicbase/echo-server:functional-519686 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-519686 image load --daemon kicbase/echo-server:functional-519686 --alsologtostderr: (1.019742279s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.30s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-519686
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image load --daemon kicbase/echo-server:functional-519686 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-519686 image load --daemon kicbase/echo-server:functional-519686 --alsologtostderr: (1.038852936s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.56s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image save kicbase/echo-server:functional-519686 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image rm kicbase/echo-server:functional-519686 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-519686
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-519686 image save --daemon kicbase/echo-server:functional-519686 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-519686
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-519686
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-519686
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-519686
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (137.74s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-231343 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0814 00:40:57.310851  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:41:02.433054  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:41:12.675242  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:41:33.157453  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:42:14.118819  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-231343 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m16.946807049s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (137.74s)
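For reference, the HA topology under test comes from a single start invocation with the --ha flag; a minimal sketch using the flags from this run:

  out/minikube-linux-arm64 start -p ha-231343 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p ha-231343 status -v=7 --alsologtostderr   # should report every control-plane node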

TestMultiControlPlane/serial/DeployApp (30.48s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- rollout status deployment/busybox
E0814 00:43:36.040329  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-231343 -- rollout status deployment/busybox: (27.345668734s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-5cnkg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-9ffs8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-mrdpk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-5cnkg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-9ffs8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-mrdpk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-5cnkg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-9ffs8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-mrdpk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (30.48s)
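
A minimal sketch (not the test's own helper) of the three-level DNS check driven above: each busybox pod resolves an external name, the short in-cluster service name, and the fully qualified service name through the minikube-wrapped kubectl. Pod names are copied from this run's log and will differ between runs.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		pods := []string{"busybox-7dff88458-5cnkg", "busybox-7dff88458-9ffs8", "busybox-7dff88458-mrdpk"}
		// External, short in-cluster, and fully qualified names, as exercised above.
		names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, name := range names {
			for _, pod := range pods {
				out, err := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", "ha-231343",
					"--", "exec", pod, "--", "nslookup", name).CombinedOutput()
				fmt.Printf("%s -> %s (err=%v)\n%s\n", pod, name, err, out)
			}
		}
	}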

TestMultiControlPlane/serial/PingHostFromPods (1.65s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-5cnkg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-5cnkg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-9ffs8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-9ffs8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-mrdpk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-231343 -- exec busybox-7dff88458-mrdpk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.65s)
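
What the shell pipeline above does: busybox's nslookup prints the resolved address for host.minikube.internal on its fifth line, so `awk 'NR==5' | cut -d' ' -f3` picks out the third space-separated field of that line, which the test then pings once (192.168.49.1 here). A sketch of the same extraction in Go; the sample output layout is an assumption for illustration.

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIP mirrors awk 'NR==5' | cut -d' ' -f3: take line 5, split on single
	// spaces, return the third field.
	func hostIP(nslookupOut string) string {
		lines := strings.Split(nslookupOut, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return ""
		}
		return fields[2]
	}

	func main() {
		// Illustrative busybox-style nslookup output; real output may differ.
		sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress 1: 192.168.49.1 host.minikube.internal"
		fmt.Println(hostIP(sample)) // 192.168.49.1 under this sample layout
	}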

TestMultiControlPlane/serial/AddWorkerNode (24.21s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-231343 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-231343 -v=7 --alsologtostderr: (23.213994499s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.21s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-231343 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

TestMultiControlPlane/serial/CopyFile (18.75s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-231343 status --output json -v=7 --alsologtostderr: (1.007111673s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp testdata/cp-test.txt ha-231343:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile76204422/001/cp-test_ha-231343.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343:/home/docker/cp-test.txt ha-231343-m02:/home/docker/cp-test_ha-231343_ha-231343-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m02 "sudo cat /home/docker/cp-test_ha-231343_ha-231343-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343:/home/docker/cp-test.txt ha-231343-m03:/home/docker/cp-test_ha-231343_ha-231343-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m03 "sudo cat /home/docker/cp-test_ha-231343_ha-231343-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343:/home/docker/cp-test.txt ha-231343-m04:/home/docker/cp-test_ha-231343_ha-231343-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m04 "sudo cat /home/docker/cp-test_ha-231343_ha-231343-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp testdata/cp-test.txt ha-231343-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile76204422/001/cp-test_ha-231343-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343-m02:/home/docker/cp-test.txt ha-231343:/home/docker/cp-test_ha-231343-m02_ha-231343.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343 "sudo cat /home/docker/cp-test_ha-231343-m02_ha-231343.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343-m02:/home/docker/cp-test.txt ha-231343-m03:/home/docker/cp-test_ha-231343-m02_ha-231343-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m03 "sudo cat /home/docker/cp-test_ha-231343-m02_ha-231343-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343-m02:/home/docker/cp-test.txt ha-231343-m04:/home/docker/cp-test_ha-231343-m02_ha-231343-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m04 "sudo cat /home/docker/cp-test_ha-231343-m02_ha-231343-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp testdata/cp-test.txt ha-231343-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile76204422/001/cp-test_ha-231343-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343-m03:/home/docker/cp-test.txt ha-231343:/home/docker/cp-test_ha-231343-m03_ha-231343.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343 "sudo cat /home/docker/cp-test_ha-231343-m03_ha-231343.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343-m03:/home/docker/cp-test.txt ha-231343-m02:/home/docker/cp-test_ha-231343-m03_ha-231343-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m02 "sudo cat /home/docker/cp-test_ha-231343-m03_ha-231343-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343-m03:/home/docker/cp-test.txt ha-231343-m04:/home/docker/cp-test_ha-231343-m03_ha-231343-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m04 "sudo cat /home/docker/cp-test_ha-231343-m03_ha-231343-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp testdata/cp-test.txt ha-231343-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile76204422/001/cp-test_ha-231343-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343-m04:/home/docker/cp-test.txt ha-231343:/home/docker/cp-test_ha-231343-m04_ha-231343.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343 "sudo cat /home/docker/cp-test_ha-231343-m04_ha-231343.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343-m04:/home/docker/cp-test.txt ha-231343-m02:/home/docker/cp-test_ha-231343-m04_ha-231343-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m02 "sudo cat /home/docker/cp-test_ha-231343-m04_ha-231343-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 cp ha-231343-m04:/home/docker/cp-test.txt ha-231343-m03:/home/docker/cp-test_ha-231343-m04_ha-231343-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 ssh -n ha-231343-m03 "sudo cat /home/docker/cp-test_ha-231343-m04_ha-231343-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.75s)
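
The copy matrix exercised above, as a sketch: push testdata/cp-test.txt into each node, then copy it from that node to every other node, reading each copy back over ssh. Profile and node names are from the log; error handling is elided.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		return exec.Command("out/minikube-linux-arm64", args...).Run()
	}

	func main() {
		nodes := []string{"ha-231343", "ha-231343-m02", "ha-231343-m03", "ha-231343-m04"}
		for _, src := range nodes {
			_ = run("-p", "ha-231343", "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
			_ = run("-p", "ha-231343", "ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
			for _, dst := range nodes {
				if dst == src {
					continue
				}
				name := fmt.Sprintf("cp-test_%s_%s.txt", src, dst)
				_ = run("-p", "ha-231343", "cp", src+":/home/docker/cp-test.txt", dst+":/home/docker/"+name)
				_ = run("-p", "ha-231343", "ssh", "-n", dst, "sudo cat /home/docker/"+name)
			}
		}
	}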

TestMultiControlPlane/serial/StopSecondaryNode (12.92s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-231343 node stop m02 -v=7 --alsologtostderr: (12.197013727s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-231343 status -v=7 --alsologtostderr: exit status 7 (718.040378ms)

-- stdout --
	ha-231343
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-231343-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-231343-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-231343-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0814 00:44:43.156980  645989 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:44:43.157236  645989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:44:43.157260  645989 out.go:304] Setting ErrFile to fd 2...
	I0814 00:44:43.157282  645989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:44:43.157638  645989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
	I0814 00:44:43.158405  645989 out.go:298] Setting JSON to false
	I0814 00:44:43.158439  645989 mustload.go:65] Loading cluster: ha-231343
	I0814 00:44:43.158863  645989 notify.go:220] Checking for updates...
	I0814 00:44:43.159070  645989 config.go:182] Loaded profile config "ha-231343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0814 00:44:43.159086  645989 status.go:255] checking status of ha-231343 ...
	I0814 00:44:43.159904  645989 cli_runner.go:164] Run: docker container inspect ha-231343 --format={{.State.Status}}
	I0814 00:44:43.178293  645989 status.go:330] ha-231343 host status = "Running" (err=<nil>)
	I0814 00:44:43.178322  645989 host.go:66] Checking if "ha-231343" exists ...
	I0814 00:44:43.178614  645989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-231343
	I0814 00:44:43.212243  645989 host.go:66] Checking if "ha-231343" exists ...
	I0814 00:44:43.212648  645989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 00:44:43.212731  645989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-231343
	I0814 00:44:43.230599  645989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/ha-231343/id_rsa Username:docker}
	I0814 00:44:43.320175  645989 ssh_runner.go:195] Run: systemctl --version
	I0814 00:44:43.324570  645989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:44:43.336827  645989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 00:44:43.398416  645989 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-14 00:44:43.38742458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0814 00:44:43.399432  645989 kubeconfig.go:125] found "ha-231343" server: "https://192.168.49.254:8443"
	I0814 00:44:43.399484  645989 api_server.go:166] Checking apiserver status ...
	I0814 00:44:43.399553  645989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 00:44:43.411643  645989 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1460/cgroup
	I0814 00:44:43.421221  645989 api_server.go:182] apiserver freezer: "13:freezer:/docker/37082135d88e637f6cd1e04eaa480e83c8324550e3e2d2ee3a24c337771751be/kubepods/burstable/poda66eeaa3f223b06703e84ef5e7c11be6/a09cd417a634f24a930c41b583f43d0cbc440ff17b1a80078ec230aba6f34bec"
	I0814 00:44:43.421294  645989 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/37082135d88e637f6cd1e04eaa480e83c8324550e3e2d2ee3a24c337771751be/kubepods/burstable/poda66eeaa3f223b06703e84ef5e7c11be6/a09cd417a634f24a930c41b583f43d0cbc440ff17b1a80078ec230aba6f34bec/freezer.state
	I0814 00:44:43.430213  645989 api_server.go:204] freezer state: "THAWED"
	I0814 00:44:43.430239  645989 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0814 00:44:43.439601  645989 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0814 00:44:43.439631  645989 status.go:422] ha-231343 apiserver status = Running (err=<nil>)
	I0814 00:44:43.439642  645989 status.go:257] ha-231343 status: &{Name:ha-231343 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:44:43.439659  645989 status.go:255] checking status of ha-231343-m02 ...
	I0814 00:44:43.439974  645989 cli_runner.go:164] Run: docker container inspect ha-231343-m02 --format={{.State.Status}}
	I0814 00:44:43.457392  645989 status.go:330] ha-231343-m02 host status = "Stopped" (err=<nil>)
	I0814 00:44:43.457414  645989 status.go:343] host is not running, skipping remaining checks
	I0814 00:44:43.457421  645989 status.go:257] ha-231343-m02 status: &{Name:ha-231343-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:44:43.457448  645989 status.go:255] checking status of ha-231343-m03 ...
	I0814 00:44:43.457761  645989 cli_runner.go:164] Run: docker container inspect ha-231343-m03 --format={{.State.Status}}
	I0814 00:44:43.480798  645989 status.go:330] ha-231343-m03 host status = "Running" (err=<nil>)
	I0814 00:44:43.480822  645989 host.go:66] Checking if "ha-231343-m03" exists ...
	I0814 00:44:43.481161  645989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-231343-m03
	I0814 00:44:43.498324  645989 host.go:66] Checking if "ha-231343-m03" exists ...
	I0814 00:44:43.498854  645989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 00:44:43.498897  645989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-231343-m03
	I0814 00:44:43.516825  645989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/ha-231343-m03/id_rsa Username:docker}
	I0814 00:44:43.608104  645989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:44:43.620955  645989 kubeconfig.go:125] found "ha-231343" server: "https://192.168.49.254:8443"
	I0814 00:44:43.620988  645989 api_server.go:166] Checking apiserver status ...
	I0814 00:44:43.621034  645989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 00:44:43.631925  645989 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup
	I0814 00:44:43.642127  645989 api_server.go:182] apiserver freezer: "13:freezer:/docker/af280ed24f4bb2ad1aa3f1d890f56873f829fdba769ad1a1c6db424ef508148e/kubepods/burstable/pod68965f78617d9ac8a51ffcf930598d65/4d8b9c0bcb483d754cb48bfd077068d7334f3d789cfbd641734539b2aa9fdcb5"
	I0814 00:44:43.642221  645989 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/af280ed24f4bb2ad1aa3f1d890f56873f829fdba769ad1a1c6db424ef508148e/kubepods/burstable/pod68965f78617d9ac8a51ffcf930598d65/4d8b9c0bcb483d754cb48bfd077068d7334f3d789cfbd641734539b2aa9fdcb5/freezer.state
	I0814 00:44:43.651614  645989 api_server.go:204] freezer state: "THAWED"
	I0814 00:44:43.651642  645989 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0814 00:44:43.659687  645989 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0814 00:44:43.659767  645989 status.go:422] ha-231343-m03 apiserver status = Running (err=<nil>)
	I0814 00:44:43.659785  645989 status.go:257] ha-231343-m03 status: &{Name:ha-231343-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:44:43.659802  645989 status.go:255] checking status of ha-231343-m04 ...
	I0814 00:44:43.660146  645989 cli_runner.go:164] Run: docker container inspect ha-231343-m04 --format={{.State.Status}}
	I0814 00:44:43.678001  645989 status.go:330] ha-231343-m04 host status = "Running" (err=<nil>)
	I0814 00:44:43.678028  645989 host.go:66] Checking if "ha-231343-m04" exists ...
	I0814 00:44:43.678336  645989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-231343-m04
	I0814 00:44:43.696842  645989 host.go:66] Checking if "ha-231343-m04" exists ...
	I0814 00:44:43.697161  645989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 00:44:43.697213  645989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-231343-m04
	I0814 00:44:43.715464  645989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/ha-231343-m04/id_rsa Username:docker}
	I0814 00:44:43.808449  645989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:44:43.825973  645989 status.go:257] ha-231343-m04 status: &{Name:ha-231343-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.92s)
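
The exit status 7 above is expected rather than a failure: per the `minikube status` help text, the exit code encodes health bitwise from right to left (1 = minikube host not OK, 2 = cluster not OK, 4 = Kubernetes not OK), so the stopped m02 node trips all three flags: 1 + 2 + 4 = 7. A sketch of the decoding:

	package main

	import "fmt"

	func main() {
		flags := []struct {
			bit   int
			label string
		}{{1, "host not OK"}, {2, "cluster not OK"}, {4, "kubernetes not OK"}}
		code := 7 // exit status observed above
		for _, f := range flags {
			if code&f.bit != 0 {
				fmt.Println(f.label)
			}
		}
	}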

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.85s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-231343 node start m02 -v=7 --alsologtostderr: (17.71296582s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-231343 status -v=7 --alsologtostderr: (1.022032616s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.85s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.80s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (149.57s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-231343 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-231343 -v=7 --alsologtostderr
E0814 00:45:04.496613  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:45:04.502962  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:45:04.514320  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:45:04.535693  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:45:04.577100  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:45:04.658455  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:45:04.819866  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:45:05.141635  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:45:05.783596  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:45:07.065072  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:45:09.627506  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:45:14.749237  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:45:24.991482  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-231343 -v=7 --alsologtostderr: (37.520107428s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-231343 --wait=true -v=7 --alsologtostderr
E0814 00:45:45.472828  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:45:52.177896  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:46:19.882096  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:46:26.434747  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-231343 --wait=true -v=7 --alsologtostderr: (1m51.89280468s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-231343
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (149.57s)
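
The invariant checked here is that the node set survives a full stop/start cycle: `node list` is captured before the stop and compared against the list after the restart. A sketch of that comparison, assuming (for illustration only) that `minikube node list` prints one "name<TAB>ip" pair per line:

	package main

	import (
		"fmt"
		"strings"
	)

	// parseNodeList pulls node names out of `minikube node list` output.
	func parseNodeList(out string) []string {
		var names []string
		for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
			if f := strings.Fields(line); len(f) > 0 {
				names = append(names, f[0])
			}
		}
		return names
	}

	func main() {
		// Illustrative capture; real IPs come from the cluster.
		before := "ha-231343\t192.168.49.2\nha-231343-m02\t192.168.49.3\nha-231343-m03\t192.168.49.4\nha-231343-m04\t192.168.49.5\n"
		after := before // the test asserts the same set comes back after restart
		fmt.Println(parseNodeList(before), parseNodeList(after))
	}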

TestMultiControlPlane/serial/DeleteSecondaryNode (10.58s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-231343 node delete m03 -v=7 --alsologtostderr: (9.668901463s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.58s)
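
The go-template passed to kubectl above (surrounding quotes stripped) walks every node's status conditions and prints the status of the Ready condition, one entry per node. The same template can be evaluated locally against a stand-in for the node list; the data shape below is a minimal assumption matching what the template dereferences.

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		nodes := map[string]any{
			"items": []map[string]any{
				{"status": map[string]any{"conditions": []map[string]any{
					{"type": "Ready", "status": "True"},
				}}},
				{"status": map[string]any{"conditions": []map[string]any{
					{"type": "Ready", "status": "True"},
				}}},
			},
		}
		t := template.Must(template.New("ready").Parse(tmpl))
		_ = t.Execute(os.Stdout, nodes) // prints " True" once per node
	}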

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

TestMultiControlPlane/serial/StopCluster (36.03s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 stop -v=7 --alsologtostderr
E0814 00:47:48.356773  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-231343 stop -v=7 --alsologtostderr: (35.918702648s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-231343 status -v=7 --alsologtostderr: exit status 7 (112.574066ms)

-- stdout --
	ha-231343
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-231343-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-231343-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0814 00:48:20.709954  660263 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:48:20.710184  660263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:48:20.710211  660263 out.go:304] Setting ErrFile to fd 2...
	I0814 00:48:20.710229  660263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:48:20.710489  660263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
	I0814 00:48:20.710750  660263 out.go:298] Setting JSON to false
	I0814 00:48:20.710806  660263 mustload.go:65] Loading cluster: ha-231343
	I0814 00:48:20.710905  660263 notify.go:220] Checking for updates...
	I0814 00:48:20.711242  660263 config.go:182] Loaded profile config "ha-231343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0814 00:48:20.711264  660263 status.go:255] checking status of ha-231343 ...
	I0814 00:48:20.711755  660263 cli_runner.go:164] Run: docker container inspect ha-231343 --format={{.State.Status}}
	I0814 00:48:20.728587  660263 status.go:330] ha-231343 host status = "Stopped" (err=<nil>)
	I0814 00:48:20.728607  660263 status.go:343] host is not running, skipping remaining checks
	I0814 00:48:20.728614  660263 status.go:257] ha-231343 status: &{Name:ha-231343 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:48:20.728657  660263 status.go:255] checking status of ha-231343-m02 ...
	I0814 00:48:20.728973  660263 cli_runner.go:164] Run: docker container inspect ha-231343-m02 --format={{.State.Status}}
	I0814 00:48:20.752175  660263 status.go:330] ha-231343-m02 host status = "Stopped" (err=<nil>)
	I0814 00:48:20.752194  660263 status.go:343] host is not running, skipping remaining checks
	I0814 00:48:20.752201  660263 status.go:257] ha-231343-m02 status: &{Name:ha-231343-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:48:20.752220  660263 status.go:255] checking status of ha-231343-m04 ...
	I0814 00:48:20.752523  660263 cli_runner.go:164] Run: docker container inspect ha-231343-m04 --format={{.State.Status}}
	I0814 00:48:20.773676  660263 status.go:330] ha-231343-m04 host status = "Stopped" (err=<nil>)
	I0814 00:48:20.773706  660263 status.go:343] host is not running, skipping remaining checks
	I0814 00:48:20.773714  660263 status.go:257] ha-231343-m04 status: &{Name:ha-231343-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.03s)

TestMultiControlPlane/serial/RestartCluster (80.9s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-231343 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-231343 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m19.974942013s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (80.90s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

TestMultiControlPlane/serial/AddSecondaryNode (39.95s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-231343 --control-plane -v=7 --alsologtostderr
E0814 00:50:04.496166  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-231343 --control-plane -v=7 --alsologtostderr: (38.972639667s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-231343 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.95s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

TestJSONOutput/start/Command (51.28s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-827975 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0814 00:50:32.198337  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:50:52.177619  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-827975 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (51.277944872s)
--- PASS: TestJSONOutput/start/Command (51.28s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-827975 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-827975 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-827975 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-827975 --output=json --user=testUser: (5.758808861s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-161801 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-161801 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.458348ms)

-- stdout --
	{"specversion":"1.0","id":"2083ef03-60a5-401e-b03c-c9da7ddc2884","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-161801] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a6090e0a-4d98-4d80-966c-1684db60c158","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19429"}}
	{"specversion":"1.0","id":"0321e02a-c785-4bd5-a685-1335ee4c0143","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f831c5c6-e3d1-4e97-8d81-d8faf1738ea4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19429-587614/kubeconfig"}}
	{"specversion":"1.0","id":"c772895f-a887-45a9-9faa-e00e2ff581fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-587614/.minikube"}}
	{"specversion":"1.0","id":"f29e9aed-f3dd-4a2c-bef8-1b18b620eb93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f9c594ab-02c8-4d4f-9941-e4c4dcbf5021","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b5ffb6a5-7cc4-43f7-90ea-b72893b0cb60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-161801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-161801
--- PASS: TestErrorJSONOutput (0.22s)
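
The JSON lines in the stdout block above are CloudEvents envelopes, which is what minikube emits under --output=json. A minimal decoder for the final error event, using the exact line from the log:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type cloudEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","id":"b5ffb6a5-7cc4-43f7-90ea-b72893b0cb60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}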

TestKicCustomNetwork/create_custom_network (38.24s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-641764 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-641764 --network=: (36.087751881s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-641764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-641764
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-641764: (2.126394773s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.24s)

TestKicCustomNetwork/use_default_bridge_network (33.1s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-104016 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-104016 --network=bridge: (31.010986821s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-104016" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-104016
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-104016: (2.068205605s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.10s)

TestKicExistingNetwork (33.11s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-288782 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-288782 --network=existing-network: (31.016646515s)
helpers_test.go:175: Cleaning up "existing-network-288782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-288782
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-288782: (1.921386175s)
--- PASS: TestKicExistingNetwork (33.11s)

TestKicCustomSubnet (32.91s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-365879 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-365879 --subnet=192.168.60.0/24: (30.821976842s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-365879 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-365879" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-365879
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-365879: (2.062304612s)
--- PASS: TestKicCustomSubnet (32.91s)
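
The subnet check above reads the first IPAM config block of the profile's Docker network. The same verification from Go, shelling out to the docker CLI exactly as the test does:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-365879",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("subnet: %s", out) // expected 192.168.60.0/24, per the --subnet flag above
	}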

TestKicStaticIP (31.51s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-471475 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-471475 --static-ip=192.168.200.200: (29.328011824s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-471475 ip
helpers_test.go:175: Cleaning up "static-ip-471475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-471475
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-471475: (2.029005345s)
--- PASS: TestKicStaticIP (31.51s)
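
The static-IP flow is symmetric; a sketch with a placeholder profile name:

    minikube start -p demo-static --static-ip=192.168.200.200
    # Should print the requested address back.
    minikube -p demo-static ip
    minikube delete -p demo-static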

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (70.03s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-755861 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-755861 --driver=docker  --container-runtime=containerd: (32.93913178s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-758823 --driver=docker  --container-runtime=containerd
E0814 00:55:04.496061  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-758823 --driver=docker  --container-runtime=containerd: (31.636681449s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-755861
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-758823
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-758823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-758823
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-758823: (2.00016599s)
helpers_test.go:175: Cleaning up "first-755861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-755861
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-755861: (2.205410966s)
--- PASS: TestMinikubeProfile (70.03s)
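
The profile workflow the test exercises, as a sketch (profile names are placeholders):

    minikube start -p first --driver=docker --container-runtime=containerd
    minikube start -p second --driver=docker --container-runtime=containerd
    # Switch the active profile, then dump all profiles as JSON for
    # programmatic inspection.
    minikube profile first
    minikube profile list -ojson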

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.08s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-885443 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-885443 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.076040306s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.08s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-885443 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
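
Start-with-mount plus its verification, condensed into one sketch (mount-demo is a placeholder; the flag values mirror the test invocation above):

    # --mount exposes a host directory inside the node at /minikube-host.
    minikube start -p mount-demo --memory=2048 --mount --mount-gid 0 \
      --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=docker --container-runtime=containerd
    # Verify the mount from inside the node.
    minikube -p mount-demo ssh -- ls /minikube-host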

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.7s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-898653 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-898653 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.699393085s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.70s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-898653 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-885443 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-885443 --alsologtostderr -v=5: (1.61982021s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-898653 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-898653
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-898653: (1.201481888s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.37s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-898653
E0814 00:55:52.178087  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-898653: (6.365234453s)
--- PASS: TestMountStart/serial/RestartStopped (7.37s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-898653 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (69.33s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-877501 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-877501 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.82689273s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.33s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (18.21s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- rollout status deployment/busybox
E0814 00:57:15.243895  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-877501 -- rollout status deployment/busybox: (16.292018708s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- exec busybox-7dff88458-4ptbs -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- exec busybox-7dff88458-s64kc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- exec busybox-7dff88458-4ptbs -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- exec busybox-7dff88458-s64kc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- exec busybox-7dff88458-4ptbs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- exec busybox-7dff88458-s64kc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (18.21s)
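
The JSONPath queries and in-pod DNS checks above generalize to any deployment; a sketch ("multinode-demo" is a placeholder profile):

    # Grab pod IPs and names with JSONPath, then exercise cluster DNS
    # from inside one of the pods.
    minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[*].status.podIP}'
    POD=$(minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[0].metadata.name}')
    minikube kubectl -p multinode-demo -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local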

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- exec busybox-7dff88458-4ptbs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- exec busybox-7dff88458-4ptbs -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- exec busybox-7dff88458-s64kc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-877501 -- exec busybox-7dff88458-s64kc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
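
host.minikube.internal resolves, from inside a pod, to the host-side address; the awk/cut pipeline below is the same extraction the test performs (multinode-demo is a placeholder profile):

    POD=$(minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[0].metadata.name}')
    HOST_IP=$(minikube kubectl -p multinode-demo -- exec "$POD" -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    # Confirm the pod can reach the host.
    minikube kubectl -p multinode-demo -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"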

                                                
                                    
TestMultiNode/serial/AddNode (17.36s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-877501 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-877501 -v 3 --alsologtostderr: (16.72536393s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.36s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-877501 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.32s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.93s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 cp testdata/cp-test.txt multinode-877501:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 cp multinode-877501:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1021202874/001/cp-test_multinode-877501.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 cp multinode-877501:/home/docker/cp-test.txt multinode-877501-m02:/home/docker/cp-test_multinode-877501_multinode-877501-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501-m02 "sudo cat /home/docker/cp-test_multinode-877501_multinode-877501-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 cp multinode-877501:/home/docker/cp-test.txt multinode-877501-m03:/home/docker/cp-test_multinode-877501_multinode-877501-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501-m03 "sudo cat /home/docker/cp-test_multinode-877501_multinode-877501-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 cp testdata/cp-test.txt multinode-877501-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 cp multinode-877501-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1021202874/001/cp-test_multinode-877501-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 cp multinode-877501-m02:/home/docker/cp-test.txt multinode-877501:/home/docker/cp-test_multinode-877501-m02_multinode-877501.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501 "sudo cat /home/docker/cp-test_multinode-877501-m02_multinode-877501.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 cp multinode-877501-m02:/home/docker/cp-test.txt multinode-877501-m03:/home/docker/cp-test_multinode-877501-m02_multinode-877501-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501-m03 "sudo cat /home/docker/cp-test_multinode-877501-m02_multinode-877501-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 cp testdata/cp-test.txt multinode-877501-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 cp multinode-877501-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1021202874/001/cp-test_multinode-877501-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 cp multinode-877501-m03:/home/docker/cp-test.txt multinode-877501:/home/docker/cp-test_multinode-877501-m03_multinode-877501.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501 "sudo cat /home/docker/cp-test_multinode-877501-m03_multinode-877501.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 cp multinode-877501-m03:/home/docker/cp-test.txt multinode-877501-m02:/home/docker/cp-test_multinode-877501-m03_multinode-877501-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 ssh -n multinode-877501-m02 "sudo cat /home/docker/cp-test_multinode-877501-m03_multinode-877501-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.93s)
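
The cp matrix above boils down to three directions; a sketch (multinode-demo is a placeholder profile, and testdata/cp-test.txt is assumed to exist locally):

    # host -> node
    minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
    # node -> host
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node (source and destination can both be node paths)
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt \
      multinode-demo-m02:/home/docker/cp-test.txt
    # Verify on the target node; -n selects the node within the profile.
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"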

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-877501 node stop m03: (1.216401721s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-877501 status: exit status 7 (505.905361ms)

                                                
                                                
-- stdout --
	multinode-877501
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-877501-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-877501-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-877501 status --alsologtostderr: exit status 7 (507.740308ms)

                                                
                                                
-- stdout --
	multinode-877501
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-877501-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-877501-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:57:58.869424  713688 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:57:58.869652  713688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:57:58.869680  713688 out.go:304] Setting ErrFile to fd 2...
	I0814 00:57:58.869698  713688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:57:58.869965  713688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
	I0814 00:57:58.870222  713688 out.go:298] Setting JSON to false
	I0814 00:57:58.870310  713688 mustload.go:65] Loading cluster: multinode-877501
	I0814 00:57:58.870456  713688 notify.go:220] Checking for updates...
	I0814 00:57:58.870887  713688 config.go:182] Loaded profile config "multinode-877501": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0814 00:57:58.870908  713688 status.go:255] checking status of multinode-877501 ...
	I0814 00:57:58.871607  713688 cli_runner.go:164] Run: docker container inspect multinode-877501 --format={{.State.Status}}
	I0814 00:57:58.894425  713688 status.go:330] multinode-877501 host status = "Running" (err=<nil>)
	I0814 00:57:58.894453  713688 host.go:66] Checking if "multinode-877501" exists ...
	I0814 00:57:58.894945  713688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-877501
	I0814 00:57:58.923923  713688 host.go:66] Checking if "multinode-877501" exists ...
	I0814 00:57:58.924232  713688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 00:57:58.924282  713688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-877501
	I0814 00:57:58.942221  713688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33648 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/multinode-877501/id_rsa Username:docker}
	I0814 00:57:59.036173  713688 ssh_runner.go:195] Run: systemctl --version
	I0814 00:57:59.040547  713688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:57:59.052066  713688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 00:57:59.116471  713688 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-14 00:57:59.10636829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0814 00:57:59.117053  713688 kubeconfig.go:125] found "multinode-877501" server: "https://192.168.67.2:8443"
	I0814 00:57:59.117093  713688 api_server.go:166] Checking apiserver status ...
	I0814 00:57:59.117136  713688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 00:57:59.129261  713688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1458/cgroup
	I0814 00:57:59.140134  713688 api_server.go:182] apiserver freezer: "13:freezer:/docker/6ddbb283c5f2afe2240968ffbb049346290a06e94ac633cb65823eb2f0eb5806/kubepods/burstable/podbe4d74fbf0126e87a4b7af74def9204a/5d7362afa9d23022b68d0fe1fabf808b9c3fe8ee8c42336b7548a1f5a75b90d1"
	I0814 00:57:59.140216  713688 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6ddbb283c5f2afe2240968ffbb049346290a06e94ac633cb65823eb2f0eb5806/kubepods/burstable/podbe4d74fbf0126e87a4b7af74def9204a/5d7362afa9d23022b68d0fe1fabf808b9c3fe8ee8c42336b7548a1f5a75b90d1/freezer.state
	I0814 00:57:59.149872  713688 api_server.go:204] freezer state: "THAWED"
	I0814 00:57:59.149901  713688 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0814 00:57:59.158087  713688 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0814 00:57:59.158125  713688 status.go:422] multinode-877501 apiserver status = Running (err=<nil>)
	I0814 00:57:59.158137  713688 status.go:257] multinode-877501 status: &{Name:multinode-877501 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:57:59.158164  713688 status.go:255] checking status of multinode-877501-m02 ...
	I0814 00:57:59.158500  713688 cli_runner.go:164] Run: docker container inspect multinode-877501-m02 --format={{.State.Status}}
	I0814 00:57:59.174848  713688 status.go:330] multinode-877501-m02 host status = "Running" (err=<nil>)
	I0814 00:57:59.174879  713688 host.go:66] Checking if "multinode-877501-m02" exists ...
	I0814 00:57:59.175227  713688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-877501-m02
	I0814 00:57:59.191028  713688 host.go:66] Checking if "multinode-877501-m02" exists ...
	I0814 00:57:59.191386  713688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 00:57:59.191443  713688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-877501-m02
	I0814 00:57:59.208283  713688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33653 SSHKeyPath:/home/jenkins/minikube-integration/19429-587614/.minikube/machines/multinode-877501-m02/id_rsa Username:docker}
	I0814 00:57:59.295648  713688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:57:59.307537  713688 status.go:257] multinode-877501-m02 status: &{Name:multinode-877501-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:57:59.307573  713688 status.go:255] checking status of multinode-877501-m03 ...
	I0814 00:57:59.307906  713688 cli_runner.go:164] Run: docker container inspect multinode-877501-m03 --format={{.State.Status}}
	I0814 00:57:59.323817  713688 status.go:330] multinode-877501-m03 host status = "Stopped" (err=<nil>)
	I0814 00:57:59.323843  713688 status.go:343] host is not running, skipping remaining checks
	I0814 00:57:59.323853  713688 status.go:257] multinode-877501-m03 status: &{Name:multinode-877501-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
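
As the non-zero exits above show, minikube status returns exit code 7 as soon as any node in the profile is down, so scripted checks need to tolerate it; a sketch (multinode-demo is a placeholder profile):

    minikube -p multinode-demo node stop m03
    # status exits non-zero (7 here) while any node is stopped; guard the
    # call rather than treating it as a hard failure.
    if ! minikube -p multinode-demo status; then
      echo "one or more nodes not running"
    fi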

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.71s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-877501 node start m03 -v=7 --alsologtostderr: (8.958910878s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.71s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (81.51s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-877501
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-877501
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-877501: (24.884309051s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-877501 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-877501 --wait=true -v=8 --alsologtostderr: (56.499047135s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-877501
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.51s)
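
The invariant being checked is that a full stop/start cycle preserves the node set; a sketch of the same comparison (multinode-demo is a placeholder profile):

    minikube node list -p multinode-demo > /tmp/nodes.before
    minikube stop -p multinode-demo
    minikube start -p multinode-demo --wait=true
    minikube node list -p multinode-demo > /tmp/nodes.after
    # The node lists before and after the restart should match.
    diff /tmp/nodes.before /tmp/nodes.after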

                                                
                                    
TestMultiNode/serial/DeleteNode (5.27s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-877501 node delete m03: (4.612047465s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.27s)
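
The go-template above prints one Ready-condition status per remaining node; paired with node delete it makes a compact health check (multinode-demo is a placeholder profile):

    minikube -p multinode-demo node delete m03
    # Every emitted line should read True once the survivors settle.
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'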

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.94s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-877501 stop: (23.757555793s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-877501 status: exit status 7 (95.754086ms)

                                                
                                                
-- stdout --
	multinode-877501
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-877501-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-877501 status --alsologtostderr: exit status 7 (91.031232ms)

                                                
                                                
-- stdout --
	multinode-877501
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-877501-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:59:59.709714  721665 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:59:59.709868  721665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:59:59.709880  721665 out.go:304] Setting ErrFile to fd 2...
	I0814 00:59:59.709886  721665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:59:59.710147  721665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
	I0814 00:59:59.710389  721665 out.go:298] Setting JSON to false
	I0814 00:59:59.710434  721665 mustload.go:65] Loading cluster: multinode-877501
	I0814 00:59:59.710542  721665 notify.go:220] Checking for updates...
	I0814 00:59:59.710907  721665 config.go:182] Loaded profile config "multinode-877501": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0814 00:59:59.710922  721665 status.go:255] checking status of multinode-877501 ...
	I0814 00:59:59.711760  721665 cli_runner.go:164] Run: docker container inspect multinode-877501 --format={{.State.Status}}
	I0814 00:59:59.729595  721665 status.go:330] multinode-877501 host status = "Stopped" (err=<nil>)
	I0814 00:59:59.729620  721665 status.go:343] host is not running, skipping remaining checks
	I0814 00:59:59.729628  721665 status.go:257] multinode-877501 status: &{Name:multinode-877501 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:59:59.729660  721665 status.go:255] checking status of multinode-877501-m02 ...
	I0814 00:59:59.729984  721665 cli_runner.go:164] Run: docker container inspect multinode-877501-m02 --format={{.State.Status}}
	I0814 00:59:59.753342  721665 status.go:330] multinode-877501-m02 host status = "Stopped" (err=<nil>)
	I0814 00:59:59.753368  721665 status.go:343] host is not running, skipping remaining checks
	I0814 00:59:59.753376  721665 status.go:257] multinode-877501-m02 status: &{Name:multinode-877501-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.94s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.47s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-877501 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0814 01:00:04.496690  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-877501 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (46.817104601s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-877501 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.47s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.84s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-877501
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-877501-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-877501-m02 --driver=docker  --container-runtime=containerd: exit status 14 (85.940154ms)

                                                
                                                
-- stdout --
	* [multinode-877501-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-587614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-587614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-877501-m02' is duplicated with machine name 'multinode-877501-m02' in profile 'multinode-877501'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-877501-m03 --driver=docker  --container-runtime=containerd
E0814 01:00:52.178264  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-877501-m03 --driver=docker  --container-runtime=containerd: (31.436367775s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-877501
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-877501: exit status 80 (321.833569ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-877501 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-877501-m03 already exists in multinode-877501-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-877501-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-877501-m03: (1.947009017s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.84s)
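
The MK_USAGE rejection above surfaces as exit code 14, which scripts can test for explicitly; a sketch, assuming a multinode-demo profile whose second machine is named multinode-demo-m02:

    # Reusing a machine name from an existing profile is rejected.
    minikube start -p multinode-demo-m02 --driver=docker --container-runtime=containerd
    if [ $? -eq 14 ]; then
      echo "profile name collides with an existing machine name"
    fi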

                                                
                                    
TestPreload (120.03s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-957125 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0814 01:01:27.560537  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-957125 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m16.881650742s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-957125 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-957125 image pull gcr.io/k8s-minikube/busybox: (1.265871706s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-957125
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-957125: (12.064324785s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-957125 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-957125 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (27.286203615s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-957125 image list
helpers_test.go:175: Cleaning up "test-preload-957125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-957125
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-957125: (2.288865195s)
--- PASS: TestPreload (120.03s)
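
The preload scenario in one sketch: build without preloaded images for a pinned Kubernetes version, side-load an image, and confirm it survives a restart (preload-demo is a placeholder profile):

    minikube start -p preload-demo --memory=2200 --preload=false \
      --kubernetes-version=v1.24.4 --driver=docker --container-runtime=containerd
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --memory=2200 --wait=true \
      --driver=docker --container-runtime=containerd
    # busybox should still be listed after the restart.
    minikube -p preload-demo image list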

                                                
                                    
TestScheduledStopUnix (108.86s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-948986 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-948986 --memory=2048 --driver=docker  --container-runtime=containerd: (32.670677499s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-948986 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-948986 -n scheduled-stop-948986
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-948986 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-948986 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-948986 -n scheduled-stop-948986
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-948986
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-948986 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0814 01:05:04.495809  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-948986
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-948986: exit status 7 (76.527446ms)

                                                
                                                
-- stdout --
	scheduled-stop-948986
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-948986 -n scheduled-stop-948986
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-948986 -n scheduled-stop-948986: exit status 7 (67.408316ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-948986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-948986
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-948986: (4.656846965s)
--- PASS: TestScheduledStopUnix (108.86s)
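
The scheduled-stop lifecycle the test walks through, as a sketch (assumes a running sched-demo profile):

    minikube stop -p sched-demo --schedule 5m      # arm a stop five minutes out
    minikube status -p sched-demo --format='{{.TimeToStop}}'
    minikube stop -p sched-demo --cancel-scheduled # disarm the pending stop
    minikube stop -p sched-demo --schedule 15s     # re-arm; soon after, status exits 7 (Stopped)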

                                                
                                    
TestInsufficientStorage (10.23s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-794279 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-794279 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.768614012s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4461eeca-d8ed-4d18-8a3f-11033f5dd2b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-794279] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6437fe2-4180-4232-a0c0-10200815adfb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19429"}}
	{"specversion":"1.0","id":"59f11504-366f-46f0-833d-bed5be330aea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4b11fca3-0ac4-4b66-a142-f605f8d0801e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19429-587614/kubeconfig"}}
	{"specversion":"1.0","id":"02ca92e2-2ba2-42b4-9058-3299bd89e7fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-587614/.minikube"}}
	{"specversion":"1.0","id":"7c97fe33-072d-481e-be07-c152b9f05e15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c5117f80-aef2-49ac-af50-cd081efb78e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ad34d2eb-cfd0-4ac1-9eb5-eaa65d86e29b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ee52813b-ee79-43f4-8426-590696c0ca7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f7506048-0566-48e9-9c44-94ad96e9cf60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c1008a8-16f9-4059-ad70-e9f00fe80590","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ebae45e0-efd0-45c5-b557-24dd9009daa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-794279\" primary control-plane node in \"insufficient-storage-794279\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0da7f93a-7580-4dad-8f5f-94ae00f06ac3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723567951-19429 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"262cbc62-f6bf-4ac4-936c-485af7aefa08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d12c9a34-67af-4676-a208-b592fc285604","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-794279 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-794279 --output=json --layout=cluster: exit status 7 (282.097152ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-794279","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-794279","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:05:21.908290  740257 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-794279" does not appear in /home/jenkins/minikube-integration/19429-587614/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-794279 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-794279 --output=json --layout=cluster: exit status 7 (295.104635ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-794279","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-794279","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:05:22.202459  740317 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-794279" does not appear in /home/jenkins/minikube-integration/19429-587614/kubeconfig
	E0814 01:05:22.212847  740317 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/insufficient-storage-794279/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-794279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-794279
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-794279: (1.884592073s)
--- PASS: TestInsufficientStorage (10.23s)
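
--output=json turns start into a CloudEvents stream (one JSON object per line), so failures like the RSRC_DOCKER_STORAGE error above are machine-readable. A sketch, assuming jq is available and that storage pressure is simulated via the MINIKUBE_TEST_* variables visible in the log output:

    export MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19
    # Surface only error events from the start stream.
    minikube start -p storage-demo --output=json --driver=docker --container-runtime=containerd \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # The cluster-layout status carries numeric codes (507 = InsufficientStorage).
    minikube status -p storage-demo --output=json --layout=cluster | jq '.StatusCode'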

                                                
                                    
TestRunningBinaryUpgrade (83.87s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1534043064 start -p running-upgrade-054550 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0814 01:10:04.496360  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1534043064 start -p running-upgrade-054550 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.59611544s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-054550 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0814 01:10:52.178646  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-054550 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.792888866s)
helpers_test.go:175: Cleaning up "running-upgrade-054550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-054550
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-054550: (2.282856933s)
--- PASS: TestRunningBinaryUpgrade (83.87s)
TestKubernetesUpgrade (343.2s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-299238 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-299238 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (56.706047985s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-299238
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-299238: (1.218295859s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-299238 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-299238 status --format={{.Host}}: exit status 7 (97.270798ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-299238 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-299238 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m35.592119715s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-299238 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-299238 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-299238 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (81.137668ms)
-- stdout --
	* [kubernetes-upgrade-299238] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-587614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-587614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-299238
	    minikube start -p kubernetes-upgrade-299238 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2992382 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-299238 --kubernetes-version=v1.31.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-299238 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-299238 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.108406161s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-299238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-299238
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-299238: (2.281529312s)
--- PASS: TestKubernetesUpgrade (343.20s)
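Exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) is distinct from a generic failure, so the recovery that the error message suggests can be scripted. A hedged sketch, reusing the exact commands from the suggestion block above:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-299238 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    if [ $? -eq 106 ]; then
      # downgrade refused: recreate the cluster at the older version instead
      out/minikube-linux-arm64 delete -p kubernetes-upgrade-299238
      out/minikube-linux-arm64 start -p kubernetes-upgrade-299238 --kubernetes-version=v1.20.0
    fi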
TestMissingContainerUpgrade (164.56s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3922997869 start -p missing-upgrade-138302 --memory=2200 --driver=docker  --container-runtime=containerd
E0814 01:05:52.180179  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3922997869 start -p missing-upgrade-138302 --memory=2200 --driver=docker  --container-runtime=containerd: (1m11.448162177s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-138302
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-138302: (12.585542578s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-138302
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-138302 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-138302 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m17.477333118s)
helpers_test.go:175: Cleaning up "missing-upgrade-138302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-138302
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-138302: (2.217499685s)
--- PASS: TestMissingContainerUpgrade (164.56s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-452164 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-452164 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (71.744094ms)
-- stdout --
	* [NoKubernetes-452164] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-587614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-587614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
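The MK_USAGE failure is this test's expected outcome: --no-kubernetes and --kubernetes-version are mutually exclusive. Assembled from commands this test group actually runs, the two valid alternatives are:

    # drop the version flag entirely ...
    out/minikube-linux-arm64 start -p NoKubernetes-452164 --no-kubernetes --driver=docker --container-runtime=containerd
    # ... or clear a version pinned in the global config first
    out/minikube-linux-arm64 config unset kubernetes-version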
TestNoKubernetes/serial/StartWithK8s (36.94s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-452164 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-452164 --driver=docker  --container-runtime=containerd: (36.247761905s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-452164 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.94s)
TestNoKubernetes/serial/StartWithStopK8s (19.01s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-452164 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-452164 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.509046072s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-452164 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-452164 status -o json: exit status 2 (376.633146ms)
-- stdout --
	{"Name":"NoKubernetes-452164","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-452164
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-452164: (2.12856462s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.01s)
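The single-line status document above is convenient for asserting the "host up, Kubernetes down" state; a small sketch, assuming jq (the command exits 2 here, but its stdout still flows through the pipe):

    out/minikube-linux-arm64 -p NoKubernetes-452164 status -o json | jq -r '"\(.Host)/\(.Kubelet)/\(.APIServer)"'
    # per the stdout above: Running/Stopped/Stopped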
TestNoKubernetes/serial/Start (9.83s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-452164 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-452164 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.834681917s)
--- PASS: TestNoKubernetes/serial/Start (9.83s)
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-452164 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-452164 "sudo systemctl is-active --quiet service kubelet": exit status 1 (324.451637ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
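The ssh probe leans on systemctl semantics: is-active --quiet prints nothing and exits 0 only when the unit is active, and the status 3 relayed here is the standard "inactive" exit code. Run by hand, the same check looks like:

    out/minikube-linux-arm64 ssh -p NoKubernetes-452164 "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet is not active"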
TestNoKubernetes/serial/ProfileList (1.3s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.30s)
TestNoKubernetes/serial/Stop (1.27s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-452164
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-452164: (1.271115736s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)
TestNoKubernetes/serial/StartNoArgs (6.78s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-452164 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-452164 --driver=docker  --container-runtime=containerd: (6.778291115s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.78s)
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-452164 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-452164 "sudo systemctl is-active --quiet service kubelet": exit status 1 (261.146421ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)
TestStoppedBinaryUpgrade/Setup (0.86s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.86s)
TestStoppedBinaryUpgrade/Upgrade (108.45s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3870550884 start -p stopped-upgrade-475535 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3870550884 start -p stopped-upgrade-475535 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.190267469s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3870550884 -p stopped-upgrade-475535 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3870550884 -p stopped-upgrade-475535 stop: (19.936634245s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-475535 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-475535 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.32603095s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (108.45s)
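The upgrade path exercised here is: provision with the old release binary, stop the cluster, then restart the same profile with the binary under test. Condensed from the Run lines above (note that v1.26.0 still used the legacy --vm-driver spelling):

    /tmp/minikube-v1.26.0.3870550884 start -p stopped-upgrade-475535 --memory=2200 --vm-driver=docker --container-runtime=containerd
    /tmp/minikube-v1.26.0.3870550884 -p stopped-upgrade-475535 stop
    out/minikube-linux-arm64 start -p stopped-upgrade-475535 --memory=2200 --driver=docker --container-runtime=containerd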
TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-475535
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-475535: (1.154322092s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)
TestPause/serial/Start (52.8s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-964968 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-964968 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (52.801087733s)
--- PASS: TestPause/serial/Start (52.80s)
TestPause/serial/SecondStartNoReconfiguration (7.97s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-964968 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-964968 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.931791989s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.97s)
TestPause/serial/Pause (0.93s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-964968 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)
TestPause/serial/VerifyStatus (0.39s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-964968 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-964968 --output=json --layout=cluster: exit status 2 (385.312379ms)
-- stdout --
	{"Name":"pause-964968","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-964968","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
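The status codes in the cluster layout borrow HTTP numbering: 200 OK, 405 Stopped, and 418 Paused here, plus 500 Error and 507 InsufficientStorage earlier in this report. A sketch of asserting the paused state from a script, assuming jq:

    out/minikube-linux-arm64 status -p pause-964968 --output=json --layout=cluster \
      | jq -e '.StatusCode == 418' >/dev/null && echo paused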
TestPause/serial/Unpause (0.87s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-964968 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.87s)
TestPause/serial/PauseAgain (1s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-964968 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-964968 --alsologtostderr -v=5: (1.002281658s)
--- PASS: TestPause/serial/PauseAgain (1.00s)
TestPause/serial/DeletePaused (3.17s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-964968 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-964968 --alsologtostderr -v=5: (3.167068741s)
--- PASS: TestPause/serial/DeletePaused (3.17s)
TestPause/serial/VerifyDeletedResources (0.27s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-964968
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-964968: exit status 1 (19.605907ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-964968: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.27s)
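The cleanup verification boils down to three docker-side probes; an equivalent manual pass (profile name from this run):

    docker ps -a --filter name=pause-964968 --format '{{.Names}}'      # expect no output
    docker volume inspect pause-964968                                 # expect "no such volume"
    docker network ls --filter name=pause-964968 --format '{{.Name}}'  # expect no output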
TestNetworkPlugins/group/false (4.54s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-789888 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-789888 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (264.989851ms)
-- stdout --
	* [false-789888] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-587614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-587614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0814 01:12:37.503939  780908 out.go:291] Setting OutFile to fd 1 ...
	I0814 01:12:37.504067  780908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:12:37.504073  780908 out.go:304] Setting ErrFile to fd 2...
	I0814 01:12:37.504078  780908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:12:37.504331  780908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-587614/.minikube/bin
	I0814 01:12:37.504739  780908 out.go:298] Setting JSON to false
	I0814 01:12:37.505645  780908 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17702,"bootTime":1723580256,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0814 01:12:37.505708  780908 start.go:139] virtualization:  
	I0814 01:12:37.508065  780908 out.go:177] * [false-789888] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0814 01:12:37.510219  780908 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 01:12:37.510388  780908 notify.go:220] Checking for updates...
	I0814 01:12:37.514552  780908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 01:12:37.516474  780908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-587614/kubeconfig
	I0814 01:12:37.518108  780908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-587614/.minikube
	I0814 01:12:37.519844  780908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0814 01:12:37.521821  780908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 01:12:37.524135  780908 config.go:182] Loaded profile config "force-systemd-flag-585145": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0814 01:12:37.524244  780908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 01:12:37.565479  780908 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0814 01:12:37.565653  780908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 01:12:37.689903  780908 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-14 01:12:37.676455095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0814 01:12:37.690013  780908 docker.go:307] overlay module found
	I0814 01:12:37.693267  780908 out.go:177] * Using the docker driver based on user configuration
	I0814 01:12:37.694814  780908 start.go:297] selected driver: docker
	I0814 01:12:37.694829  780908 start.go:901] validating driver "docker" against <nil>
	I0814 01:12:37.694849  780908 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 01:12:37.697316  780908 out.go:177] 
	W0814 01:12:37.699017  780908 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0814 01:12:37.700953  780908 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-789888 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-789888

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-789888

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-789888

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-789888

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-789888

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-789888

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-789888

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-789888

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-789888

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-789888

>>> host: /etc/nsswitch.conf:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: /etc/hosts:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: /etc/resolv.conf:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-789888

>>> host: crictl pods:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: crictl containers:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> k8s: describe netcat deployment:
error: context "false-789888" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-789888" does not exist

>>> k8s: netcat logs:
error: context "false-789888" does not exist

>>> k8s: describe coredns deployment:
error: context "false-789888" does not exist

>>> k8s: describe coredns pods:
error: context "false-789888" does not exist

>>> k8s: coredns logs:
error: context "false-789888" does not exist

>>> k8s: describe api server pod(s):
error: context "false-789888" does not exist

>>> k8s: api server logs:
error: context "false-789888" does not exist

>>> host: /etc/cni:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: ip a s:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: ip r s:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: iptables-save:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: iptables table nat:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> k8s: describe kube-proxy daemon set:
error: context "false-789888" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-789888" does not exist

>>> k8s: kube-proxy logs:
error: context "false-789888" does not exist

>>> host: kubelet daemon status:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: kubelet daemon config:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> k8s: kubelet logs:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-789888

>>> host: docker daemon status:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: docker daemon config:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: /etc/docker/daemon.json:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: docker system info:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: cri-docker daemon status:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: cri-docker daemon config:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: cri-dockerd version:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: containerd daemon status:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: containerd daemon config:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: /etc/containerd/config.toml:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: containerd config dump:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: crio daemon status:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: crio daemon config:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: /etc/crio:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"

>>> host: crio config:
* Profile "false-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789888"
----------------------- debugLogs end: false-789888 [took: 4.052108148s] --------------------------------
helpers_test.go:175: Cleaning up "false-789888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-789888
--- PASS: TestNetworkPlugins/group/false (4.54s)
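This is a negative test: the containerd runtime requires a CNI, so --cni=false must be rejected with MK_USAGE before any cluster is created (the debugLogs above then confirm no profile exists). For contrast, a start that should be accepted, keeping the other flags from the attempt above (bridge being just one valid --cni choice):

    out/minikube-linux-arm64 start -p false-789888 --memory=2048 --cni=bridge --driver=docker --container-runtime=containerd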
TestStartStop/group/old-k8s-version/serial/FirstStart (158.34s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-620816 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0814 01:15:04.496360  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:15:52.177626  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-620816 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m38.34047564s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (158.34s)
TestStartStop/group/old-k8s-version/serial/DeployApp (7.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-620816 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2f3c6f03-83f7-44fa-870e-fc11f8892277] Pending
helpers_test.go:344: "busybox" [2f3c6f03-83f7-44fa-870e-fc11f8892277] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2f3c6f03-83f7-44fa-870e-fc11f8892277] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.004204619s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-620816 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.55s)
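The deploy/verify flow above (create, poll until the labelled pod is Ready, then exec) can be reproduced with plain kubectl; a sketch substituting kubectl wait for the test helper's poll loop:

    kubectl --context old-k8s-version-620816 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-620816 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s
    kubectl --context old-k8s-version-620816 exec busybox -- /bin/sh -c "ulimit -n"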
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.67s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-620816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-620816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.469236335s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-620816 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.67s)
TestStartStop/group/old-k8s-version/serial/Stop (12.69s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-620816 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-620816 --alsologtostderr -v=3: (12.687220543s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.69s)
TestStartStop/group/no-preload/serial/FirstStart (79.4s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-451250 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-451250 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m19.400121543s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.40s)
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-620816 -n old-k8s-version-620816
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-620816 -n old-k8s-version-620816: exit status 7 (108.285435ms)
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-620816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
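A recurring detail in the EnableAddonAfterStop steps: `status --format={{.Host}}` exits with status 7 on a stopped profile while still printing "Stopped", and the test records "status error: exit status 7 (may be ok)" instead of failing. Here is a minimal Go sketch of tolerating that exit code; hostStatus is a hypothetical helper, and treating 7 as the only benign code is an assumption drawn from this log alone.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus runs `minikube status --format={{.Host}}` and treats exit
// status 7 as informational, since the log pairs it with "Stopped" output.
func hostStatus(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Output() still captures stdout on a nonzero exit, so the
		// "Stopped" text survives; report it instead of failing.
		return string(out), nil
	}
	return string(out), err
}

func main() {
	s, err := hostStatus("old-k8s-version-620816")
	fmt.Printf("host=%q err=%v\n", s, err)
}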

TestStartStop/group/old-k8s-version/serial/SecondStart (376.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-620816 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0814 01:18:07.562769  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-620816 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (6m16.215420118s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-620816 -n old-k8s-version-620816
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (376.69s)

TestStartStop/group/no-preload/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-451250 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9862e8c0-80da-453b-bbde-3a1488867c01] Pending
helpers_test.go:344: "busybox" [9862e8c0-80da-453b-bbde-3a1488867c01] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9862e8c0-80da-453b-bbde-3a1488867c01] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00401814s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-451250 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.40s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-451250 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-451250 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.058478805s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-451250 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/no-preload/serial/Stop (12.1s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-451250 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-451250 --alsologtostderr -v=3: (12.098942913s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.10s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451250 -n no-preload-451250
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451250 -n no-preload-451250: exit status 7 (72.431426ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-451250 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (266.58s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-451250 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0814 01:20:04.496039  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:20:52.178431  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-451250 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m26.232698638s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451250 -n no-preload-451250
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.58s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lrv2s" [e78f14aa-181e-4f1c-955b-bf403300064c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004554517s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lrv2s" [e78f14aa-181e-4f1c-955b-bf403300064c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003888057s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-451250 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-451250 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
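The VerifyKubernetesImages steps shell out to `image list --format=json` and flag images outside the expected set for the Kubernetes version under test. A schema-neutral Go sketch of consuming that output follows; the array-of-objects JSON shape is an assumption, so the sketch decodes into generic maps rather than claiming field names the log never shows.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "no-preload-451250", "image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// Assumed shape: a JSON array with one object per image. Decoding into
	// generic maps avoids committing to any particular field names.
	var items []map[string]any
	if err := json.Unmarshal(out, &items); err != nil {
		fmt.Println("unexpected JSON shape:", err)
		return
	}
	for _, it := range items {
		fmt.Println(it) // inspect each image entry by eye
	}
}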

TestStartStop/group/no-preload/serial/Pause (3.12s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-451250 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451250 -n no-preload-451250
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451250 -n no-preload-451250: exit status 2 (352.123991ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-451250 -n no-preload-451250
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-451250 -n no-preload-451250: exit status 2 (313.904923ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-451250 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451250 -n no-preload-451250
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-451250 -n no-preload-451250
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.12s)
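The Pause subtest encodes a small state machine: after `pause`, both status queries exit 2 while reporting APIServer=Paused and Kubelet=Stopped; after `unpause`, the same queries exit 0 again. A Go sketch of that sequence follows, with run() as a hypothetical wrapper around the CLI; the exit-code meanings are read directly off the log above.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary and returns stdout plus the exit code;
// -1 stands in for "failed to start at all".
func run(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).Output()
	code := 0
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode()
		} else {
			code = -1
		}
	}
	return string(out), code
}

func main() {
	p := "no-preload-451250"
	run("pause", "-p", p)
	// While paused, both status queries exit 2; the test logs this as
	// "status error: exit status 2 (may be ok)" rather than failing.
	api, c1 := run("status", "--format={{.APIServer}}", "-p", p, "-n", p)
	kub, c2 := run("status", "--format={{.Kubelet}}", "-p", p, "-n", p)
	fmt.Printf("paused: apiserver=%q(%d) kubelet=%q(%d)\n", api, c1, kub, c2)
	run("unpause", "-p", p)
}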

TestStartStop/group/embed-certs/serial/FirstStart (60.87s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-256618 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-256618 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m0.867948873s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (60.87s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ppglr" [b58c6068-3960-4da2-af09-1fc5491720d7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004872575s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ppglr" [b58c6068-3960-4da2-af09-1fc5491720d7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004211284s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-620816 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.15s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-620816 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/old-k8s-version/serial/Pause (3.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-620816 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-620816 --alsologtostderr -v=1: (1.061292537s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-620816 -n old-k8s-version-620816
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-620816 -n old-k8s-version-620816: exit status 2 (404.350087ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-620816 -n old-k8s-version-620816
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-620816 -n old-k8s-version-620816: exit status 2 (403.066753ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-620816 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-620816 --alsologtostderr -v=1: (1.029685258s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-620816 -n old-k8s-version-620816
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-620816 -n old-k8s-version-620816
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.87s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-141949 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-141949 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m31.939929137s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.94s)

TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-256618 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f78b689c-71af-4527-a691-a8e2a892a0e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f78b689c-71af-4527-a691-a8e2a892a0e0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003786776s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-256618 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-256618 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-256618 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030800165s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-256618 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/embed-certs/serial/Stop (12.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-256618 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-256618 --alsologtostderr -v=3: (12.006347184s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-256618 -n embed-certs-256618
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-256618 -n embed-certs-256618: exit status 7 (66.25782ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-256618 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (266.61s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-256618 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0814 01:25:04.496711  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-256618 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m26.246559692s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-256618 -n embed-certs-256618
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.61s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-141949 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c3f302ce-5463-417f-84b6-6c4a923fd3f9] Pending
helpers_test.go:344: "busybox" [c3f302ce-5463-417f-84b6-6c4a923fd3f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c3f302ce-5463-417f-84b6-6c4a923fd3f9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003732458s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-141949 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-141949 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-141949 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-141949 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-141949 --alsologtostderr -v=3: (12.060675256s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-141949 -n default-k8s-diff-port-141949
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-141949 -n default-k8s-diff-port-141949: exit status 7 (76.25262ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-141949 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-141949 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0814 01:25:52.178057  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:26:46.038925  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:26:46.045298  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:26:46.056764  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:26:46.078287  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:26:46.119735  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:26:46.201759  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:26:46.363256  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:26:46.685138  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:26:47.327479  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:26:48.609103  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:26:51.170840  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:26:56.292811  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:27:06.534606  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:27:27.016719  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:07.978603  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:18.163598  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:18.169926  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:18.181283  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:18.202780  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:18.244151  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:18.325599  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:18.487065  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:18.808812  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:19.450191  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:20.731739  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:23.293635  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:28.415001  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:38.656685  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:59.139028  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-141949 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m29.069707012s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-141949 -n default-k8s-diff-port-141949
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.48s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6nx89" [fc6e3beb-2d30-498d-bc18-60711d2fc907] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004038031s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6nx89" [fc6e3beb-2d30-498d-bc18-60711d2fc907] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0037588s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-256618 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-256618 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (3.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-256618 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-256618 -n embed-certs-256618
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-256618 -n embed-certs-256618: exit status 2 (301.981229ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-256618 -n embed-certs-256618
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-256618 -n embed-certs-256618: exit status 2 (320.7056ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-256618 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-256618 -n embed-certs-256618
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-256618 -n embed-certs-256618
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.03s)

TestStartStop/group/newest-cni/serial/FirstStart (39.05s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-052514 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0814 01:29:40.100446  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:30:04.495907  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-052514 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (39.048127763s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.05s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-79kbf" [63d4b1e7-fca3-4ddf-a05d-743c8aef9cf2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00322211s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-052514 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-052514 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.26877353s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/newest-cni/serial/Stop (1.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-052514 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-052514 --alsologtostderr -v=3: (1.306111627s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-052514 -n newest-cni-052514
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-052514 -n newest-cni-052514: exit status 7 (82.087176ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-052514 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (23.2s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-052514 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-052514 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (22.222410446s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-052514 -n newest-cni-052514
E0814 01:30:35.246552  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (23.20s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-79kbf" [63d4b1e7-fca3-4ddf-a05d-743c8aef9cf2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004115162s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-141949 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-141949 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-141949 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-141949 --alsologtostderr -v=1: (1.083308277s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-141949 -n default-k8s-diff-port-141949
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-141949 -n default-k8s-diff-port-141949: exit status 2 (432.82718ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-141949 -n default-k8s-diff-port-141949
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-141949 -n default-k8s-diff-port-141949: exit status 2 (482.277629ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-141949 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-141949 --alsologtostderr -v=1: (1.216056963s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-141949 -n default-k8s-diff-port-141949
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-141949 -n default-k8s-diff-port-141949
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.45s)

TestNetworkPlugins/group/auto/Start (88.46s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-789888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-789888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m28.455054976s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.46s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-052514 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/newest-cni/serial/Pause (3.57s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-052514 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-052514 -n newest-cni-052514
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-052514 -n newest-cni-052514: exit status 2 (390.644587ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-052514 -n newest-cni-052514
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-052514 -n newest-cni-052514: exit status 2 (390.236114ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-052514 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-052514 -n newest-cni-052514
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-052514 -n newest-cni-052514
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.57s)
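
The two exit-status-2 results above are expected: while the cluster is paused, `minikube status` reports stopped components through its exit code, and the test records them as "(may be ok)" rather than failing. A hedged Go sketch of that tolerance (profile name taken from the run above; not the harness's actual code):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-arm64", "status",
    		"--format={{.APIServer}}", "-p", "newest-cni-052514", "-n", "newest-cni-052514")
    	out, err := cmd.Output()
    	// Exit status 2 signals stopped/paused components; for a paused
    	// cluster that is the state being asserted, so don't fail on it.
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
    		fmt.Printf("status %s (exit 2, may be ok)\n", out)
    		return
    	}
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("status %s\n", out)
    }
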
E0814 01:35:57.469130  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/default-k8s-diff-port-141949/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:38.431300  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/default-k8s-diff-port-141949/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (58.68s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-789888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0814 01:30:52.177624  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/addons-785001/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:31:02.022016  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-789888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (58.680107492s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.68s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-shtv5" [c8c1c2c5-32a1-40ce-9110-6a54f11b85b2] Running
E0814 01:31:46.039242  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004806144s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-789888 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)
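
The KubeletFlags check is a one-liner over `minikube ssh`: `pgrep -a` prints each matching PID with its full command line, so the kubelet's flags are visible for inspection. Roughly, as a sketch rather than the net_test.go implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// pgrep -a prints "<pid> <full command line>", exposing kubelet's flags.
    	out, err := exec.Command("out/minikube-linux-arm64", "ssh",
    		"-p", "kindnet-789888", "pgrep -a kubelet").CombinedOutput()
    	if err != nil {
    		panic(fmt.Sprintf("%v: %s", err, out))
    	}
    	fmt.Println(strings.TrimSpace(string(out)))
    }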

TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-789888 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kt2xm" [a7ec007a-1e47-4a35-baad-e5d19ff97cd3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kt2xm" [a7ec007a-1e47-4a35-baad-e5d19ff97cd3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.00538408s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)
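
The `waiting 15m0s for pods matching "app=netcat"` lines come from polling the pod list by label selector until a pod reports Running. A rough client-go equivalent of that wait (a sketch under assumptions, not the helpers_test.go code; it assumes the current kubeconfig context is the profile's):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll until an app=netcat pod reports Running, or give up after 15m.
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 15*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods("default").List(ctx,
    				metav1.ListOptions{LabelSelector: "app=netcat"})
    			if err != nil {
    				return false, nil // tolerate transient API errors, keep polling
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("app=netcat healthy")
    }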

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-789888 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-789888 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4l6x9" [ff0277e2-15be-4863-8306-415fb908bfb4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4l6x9" [ff0277e2-15be-4863-8306-415fb908bfb4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004991676s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-789888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-789888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-789888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)
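
The HairPin probe asks the netcat pod to dial the name "netcat" — traffic leaves the pod and must hairpin back to it. In the nc invocation, `-z` only checks that something accepts the connection, and `-w 5` caps the wait at five seconds. A sketch of the same probe via kubectl, assuming the deployment is fronted by a Service named netcat on port 8080 (the manifest isn't shown in this log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Dialing the Service name from inside the pod it selects forces the
    	// connection to hairpin back to the originating pod.
    	cmd := exec.Command("kubectl", "--context", "kindnet-789888",
    		"exec", "deployment/netcat", "--",
    		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		panic(fmt.Sprintf("hairpin check failed: %v: %s", err, out))
    	}
    	fmt.Println("hairpin OK")
    }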

TestNetworkPlugins/group/auto/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-789888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

TestNetworkPlugins/group/auto/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-789888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.23s)

TestNetworkPlugins/group/auto/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-789888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.29s)

TestNetworkPlugins/group/calico/Start (73.73s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-789888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-789888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m13.730321094s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.73s)

TestNetworkPlugins/group/custom-flannel/Start (55.89s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-789888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0814 01:33:18.163905  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-789888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (55.88653345s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.89s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-789888 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-789888 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-r6cqd" [141380ff-2249-4e32-b774-18fa42660241] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-r6cqd" [141380ff-2249-4e32-b774-18fa42660241] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004214024s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7xqw2" [048bc0c1-c92f-48de-835e-4ee4069de0b4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004399895s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-789888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-789888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-789888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-789888 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-789888 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rnkft" [34430a02-bb36-4ef2-870b-186d78ed55e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rnkft" [34430a02-bb36-4ef2-870b-186d78ed55e0] Running
E0814 01:33:45.863467  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/no-preload-451250/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004712062s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

TestNetworkPlugins/group/calico/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-789888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-789888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-789888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (80.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-789888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-789888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m20.341623634s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.34s)

TestNetworkPlugins/group/flannel/Start (55.47s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-789888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0814 01:34:47.564213  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:35:04.496735  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/functional-519686/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-789888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (55.466495226s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.47s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7fwwr" [4603cb9d-19f0-4c8a-9d65-7af3a528627b] Running
E0814 01:35:16.485474  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/default-k8s-diff-port-141949/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:35:16.492617  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/default-k8s-diff-port-141949/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:35:16.504104  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/default-k8s-diff-port-141949/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:35:16.525665  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/default-k8s-diff-port-141949/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:35:16.567104  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/default-k8s-diff-port-141949/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:35:16.648528  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/default-k8s-diff-port-141949/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:35:16.810017  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/default-k8s-diff-port-141949/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:35:17.131486  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/default-k8s-diff-port-141949/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:35:17.773809  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/default-k8s-diff-port-141949/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:35:19.055915  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/default-k8s-diff-port-141949/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004911926s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-789888 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-789888 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

TestNetworkPlugins/group/flannel/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-789888 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kvq4s" [71c70e7f-bb16-4c92-901e-c30e876fb70e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kvq4s" [71c70e7f-bb16-4c92-901e-c30e876fb70e] Running
E0814 01:35:26.740653  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/default-k8s-diff-port-141949/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004238443s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-789888 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7cgk8" [9990101c-2864-4edf-a5f7-fb2b9a4c7e2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0814 01:35:21.618557  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/default-k8s-diff-port-141949/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-7cgk8" [9990101c-2864-4edf-a5f7-fb2b9a4c7e2a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003995231s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-789888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-789888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-789888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-789888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.26s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-789888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-789888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

TestNetworkPlugins/group/bridge/Start (42.49s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-789888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-789888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (42.486277471s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.49s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-789888 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-789888 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h56qr" [c05a1560-4276-461c-931e-72e536d6e92b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0814 01:36:41.465229  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/kindnet-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:41.471668  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/kindnet-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:41.483095  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/kindnet-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:41.504479  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/kindnet-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:41.545857  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/kindnet-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:41.627255  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/kindnet-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:41.788749  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/kindnet-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:42.110525  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/kindnet-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:42.752055  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/kindnet-789888/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-h56qr" [c05a1560-4276-461c-931e-72e536d6e92b] Running
E0814 01:36:44.033497  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/kindnet-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:46.039594  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/old-k8s-version-620816/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:46.595421  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/kindnet-789888/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003511707s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

TestNetworkPlugins/group/bridge/DNS (25.99s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-789888 exec deployment/netcat -- nslookup kubernetes.default
E0814 01:36:51.717648  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/kindnet-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:57.053266  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/auto-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:57.059694  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/auto-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:57.071226  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/auto-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:57.092694  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/auto-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:57.134177  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/auto-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:57.215684  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/auto-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:57.377190  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/auto-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:57.699350  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/auto-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:58.341147  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/auto-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:36:59.622794  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/auto-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:37:01.959263  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/kindnet-789888/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:37:02.185137  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/auto-789888/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-789888 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.179582155s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-789888 exec deployment/netcat -- nslookup kubernetes.default
E0814 01:37:07.306991  593008 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-587614/.minikube/profiles/auto-789888/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Done: kubectl --context bridge-789888 exec deployment/netcat -- nslookup kubernetes.default: (10.17141676s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (25.99s)
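
Note that the bridge DNS check needed a retry: the first in-pod nslookup timed out, and the second attempt succeeded, which accounts for the 25.99s total. A hedged sketch of such a retry loop (the interval and deadline here are illustrative, not the harness's actual values):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // illustrative deadline
    	for {
    		out, err := exec.Command("kubectl", "--context", "bridge-789888",
    			"exec", "deployment/netcat", "--",
    			"nslookup", "kubernetes.default").CombinedOutput()
    		if err == nil {
    			fmt.Printf("DNS OK:\n%s", out)
    			return
    		}
    		if time.Now().After(deadline) {
    			panic(fmt.Sprintf("DNS never resolved: %v: %s", err, out))
    		}
    		// Early attempts can run before kube-dns is reachable over the
    		// bridge CNI; sleep and retry, as the harness did above.
    		time.Sleep(5 * time.Second)
    	}
    }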

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-789888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-789888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.55s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-506752 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-506752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-506752
--- SKIP: TestDownloadOnlyKic (0.55s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-458940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-458940
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)

TestNetworkPlugins/group/kubenet (4.4s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-789888 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-789888

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-789888

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-789888

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-789888

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-789888

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-789888

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-789888

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-789888

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-789888

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-789888

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: /etc/hosts:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: /etc/resolv.conf:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-789888

>>> host: crictl pods:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: crictl containers:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> k8s: describe netcat deployment:
error: context "kubenet-789888" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-789888" does not exist

>>> k8s: netcat logs:
error: context "kubenet-789888" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-789888" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-789888" does not exist

>>> k8s: coredns logs:
error: context "kubenet-789888" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-789888" does not exist

>>> k8s: api server logs:
error: context "kubenet-789888" does not exist

>>> host: /etc/cni:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: ip a s:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: ip r s:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: iptables-save:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: iptables table nat:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-789888" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-789888" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-789888" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: kubelet daemon config:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> k8s: kubelet logs:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-789888

>>> host: docker daemon status:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: docker daemon config:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: docker system info:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: cri-docker daemon status:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: cri-docker daemon config:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: cri-dockerd version:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: containerd daemon status:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: containerd daemon config:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: containerd config dump:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: crio daemon status:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: crio daemon config:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: /etc/crio:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

>>> host: crio config:
* Profile "kubenet-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789888"

----------------------- debugLogs end: kubenet-789888 [took: 4.186981369s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-789888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-789888
--- SKIP: TestNetworkPlugins/group/kubenet (4.40s)
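
The skip at net_test.go:93 is a runtime gate rather than a driver gate: kubenet is a legacy network plugin with no CNI support, while the containerd runtime used in this job requires a CNI plugin. A minimal, hypothetical sketch of that kind of guard (the -container-runtime flag and its default are assumptions for illustration, not minikube's actual code):

package net_test

import (
	"flag"
	"testing"
)

// The harness would supply -container-runtime; "containerd" mirrors this
// run. The flag name and default are assumptions for illustration.
var containerRuntime = flag.String("container-runtime", "containerd", "container runtime under test")

func TestKubenet(t *testing.T) {
	// kubenet cannot be exercised on CNI-requiring runtimes such as
	// containerd or crio, so the test skips on anything but docker.
	if *containerRuntime != "docker" {
		t.Skip("Skipping the test as the containerd container runtime requires CNI")
	}
}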

TestNetworkPlugins/group/cilium (4.94s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-789888 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-789888

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-789888

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-789888

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-789888

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-789888

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-789888

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-789888

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-789888

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-789888

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-789888

>>> host: /etc/nsswitch.conf:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: /etc/hosts:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: /etc/resolv.conf:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-789888

>>> host: crictl pods:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: crictl containers:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> k8s: describe netcat deployment:
error: context "cilium-789888" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-789888" does not exist

>>> k8s: netcat logs:
error: context "cilium-789888" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-789888" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-789888" does not exist

>>> k8s: coredns logs:
error: context "cilium-789888" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-789888" does not exist

>>> k8s: api server logs:
error: context "cilium-789888" does not exist

>>> host: /etc/cni:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: ip a s:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: ip r s:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: iptables-save:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: iptables table nat:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-789888

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-789888

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-789888" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-789888" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-789888

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-789888

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-789888" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-789888" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-789888" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-789888" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-789888" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: kubelet daemon config:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> k8s: kubelet logs:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-789888

>>> host: docker daemon status:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: docker daemon config:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: docker system info:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: cri-docker daemon status:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: cri-docker daemon config:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: cri-dockerd version:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: containerd daemon status:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: containerd daemon config:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: containerd config dump:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: crio daemon status:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: crio daemon config:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: /etc/crio:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

>>> host: crio config:
* Profile "cilium-789888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789888"

----------------------- debugLogs end: cilium-789888 [took: 4.759330002s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-789888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-789888
--- SKIP: TestNetworkPlugins/group/cilium (4.94s)
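
Both debugLogs blocks above fail uniformly for the same benign reason: they run after the skip, against profiles that were never started. The kubectl config dumps show an empty Config (clusters: null, contexts: null), so every kubectl probe reports a missing context and every minikube probe reports a missing profile. A hypothetical sketch of the best-effort probe loop that emits this output (the probe list and helper name are illustrative, not minikube's actual source):

package net_test

import (
	"fmt"
	"os/exec"
)

// debugLogs runs a fixed battery of diagnostics against a profile and
// prints whatever each command returns, error text included; failures
// are deliberately not fatal, which is why a skipped test still logs
// "[pass: true]" above a wall of errors.
func debugLogs(profile string) {
	probes := []struct {
		label string
		cmd   []string
	}{
		{"netcat: nslookup kubernetes.default",
			[]string{"kubectl", "--context", profile, "exec", "deploy/netcat", "--", "nslookup", "kubernetes.default"}},
		{"host: /etc/resolv.conf",
			[]string{"minikube", "-p", profile, "ssh", "cat /etc/resolv.conf"}},
	}
	for _, p := range probes {
		out, _ := exec.Command(p.cmd[0], p.cmd[1:]...).CombinedOutput()
		fmt.Printf(">>> %s:\n%s\n", p.label, out)
	}
}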
