Test Report: Docker_Linux_containerd_arm64 19636

a6feba20ebb4dc887776b248ea5c810d31cc7846:2024-09-13:36198

Test failures (1/327)

| Order | Failed Test               | Duration (s) |
|-------|---------------------------|--------------|
| 29    | TestAddons/serial/Volcano | 199.85       |
TestAddons/serial/Volcano (199.85s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 49.455449ms
addons_test.go:843: volcano-admission stabilized in 49.772479ms
addons_test.go:835: volcano-scheduler stabilized in 50.5439ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-mnc29" [b2e0791f-e956-4457-b58b-bbb49568a9e1] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00358557s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-6qlln" [4c8db678-82e7-443b-b226-e2a338ee8e00] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003941439s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-hzv8d" [e4176790-8b76-4f47-ad4d-01921ee37495] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003045431s
addons_test.go:870: (dbg) Run:  kubectl --context addons-365496 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-365496 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-365496 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5b553ea6-4469-42a7-a5ad-2340a7584390] Pending
helpers_test.go:344: "test-job-nginx-0" [5b553ea6-4469-42a7-a5ad-2340a7584390] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:902: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:902: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-365496 -n addons-365496
addons_test.go:902: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-13 18:31:26.392432699 +0000 UTC m=+438.247101746
addons_test.go:902: (dbg) Run:  kubectl --context addons-365496 describe po test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-365496 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-e94bdfec-254f-409b-8cf3-146567ea4d18
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n5c79 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-n5c79:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:902: (dbg) Run:  kubectl --context addons-365496 logs test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-365496 logs test-job-nginx-0 -n my-volcano:
addons_test.go:903: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
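For reference, a minimal sketch of what testdata/vcjob.yaml plausibly contains, reconstructed from the pod description above. The image, command, CPU request/limit, queue name, and task name are taken from the describe output; everything else, including the heredoc form, is an assumption rather than the actual file contents.

# Hypothetical reconstruction of testdata/vcjob.yaml -- the real file may differ.
# Assumes the "my-volcano" namespace and the "test" volcano queue already exist.
kubectl --context addons-365496 create -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano
  queue: test
  tasks:
    - name: nginx          # yields pod test-job-nginx-0
      replicas: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx:latest
              command: ["sleep", "10m"]
              resources:
                requests:
                  cpu: "1"   # the request the scheduler could not satisfy
                limits:
                  cpu: "1"
EOF

The 1-CPU request is what the volcano scheduler rejects above with "0/1 nodes are unavailable: 1 Insufficient cpu."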
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-365496
helpers_test.go:235: (dbg) docker inspect addons-365496:

-- stdout --
	[
	    {
	        "Id": "4bb1777e282eaa7e2c95b915afe33cedfe4e8635b0610b141f9fccde880dfad1",
	        "Created": "2024-09-13T18:24:54.348786948Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301376,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-13T18:24:54.487171136Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7fd83909ee30d45ee853480d01e762968b1b9847bff4690fcb8ae034ea6e4a6b",
	        "ResolvConfPath": "/var/lib/docker/containers/4bb1777e282eaa7e2c95b915afe33cedfe4e8635b0610b141f9fccde880dfad1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4bb1777e282eaa7e2c95b915afe33cedfe4e8635b0610b141f9fccde880dfad1/hostname",
	        "HostsPath": "/var/lib/docker/containers/4bb1777e282eaa7e2c95b915afe33cedfe4e8635b0610b141f9fccde880dfad1/hosts",
	        "LogPath": "/var/lib/docker/containers/4bb1777e282eaa7e2c95b915afe33cedfe4e8635b0610b141f9fccde880dfad1/4bb1777e282eaa7e2c95b915afe33cedfe4e8635b0610b141f9fccde880dfad1-json.log",
	        "Name": "/addons-365496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-365496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-365496",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fe027c633b889a1cfa37810f37d8bcef95d368b42aa44d0cca88fe8ae4aef031-init/diff:/var/lib/docker/overlay2/1e27a5e54f357010ba737f5c8a23d488564c0db127238c72cb46cd665c37659d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe027c633b889a1cfa37810f37d8bcef95d368b42aa44d0cca88fe8ae4aef031/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe027c633b889a1cfa37810f37d8bcef95d368b42aa44d0cca88fe8ae4aef031/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe027c633b889a1cfa37810f37d8bcef95d368b42aa44d0cca88fe8ae4aef031/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-365496",
	                "Source": "/var/lib/docker/volumes/addons-365496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-365496",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-365496",
	                "name.minikube.sigs.k8s.io": "addons-365496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d112789639439545eb7a1760b1fae05fa35f010203ce9fa04a6b792049580529",
	            "SandboxKey": "/var/run/docker/netns/d11278963943",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-365496": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "712a4426b60e20de12cae2da7c445b50e0520bf3f20a5f9a4e0756b771011227",
	                    "EndpointID": "591200d3112434a13d1783f150403f4a3e3aaa1d9c690fdc3700eafb6db646f5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-365496",
	                        "4bb1777e282e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
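Note the resource caps in HostConfig above: NanoCpus 2000000000 (2 CPUs) and Memory 4194304000 bytes (~4 GB) for the single minikube node. With the volcano components and the usual kube-system pods already holding CPU reservations on a 2-CPU node, a fresh request for a full CPU can plausibly no longer fit, which is consistent with the "Insufficient cpu" scheduling event. A quick way to read those caps directly (a debugging sketch, not part of the test run):

# Print the CPU and memory limits from the node container's HostConfig.
# Expected here: "2000000000 4194304000" (2 CPUs, ~4 GB).
docker inspect addons-365496 --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}'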
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-365496 -n addons-365496
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-365496 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-365496 logs -n 25: (1.604900283s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-776826   | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC |                     |
	|         | -p download-only-776826              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC | 13 Sep 24 18:24 UTC |
	| delete  | -p download-only-776826              | download-only-776826   | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC | 13 Sep 24 18:24 UTC |
	| start   | -o=json --download-only              | download-only-021767   | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC |                     |
	|         | -p download-only-021767              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC | 13 Sep 24 18:24 UTC |
	| delete  | -p download-only-021767              | download-only-021767   | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC | 13 Sep 24 18:24 UTC |
	| delete  | -p download-only-776826              | download-only-776826   | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC | 13 Sep 24 18:24 UTC |
	| delete  | -p download-only-021767              | download-only-021767   | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC | 13 Sep 24 18:24 UTC |
	| start   | --download-only -p                   | download-docker-875542 | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC |                     |
	|         | download-docker-875542               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-875542            | download-docker-875542 | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC | 13 Sep 24 18:24 UTC |
	| start   | --download-only -p                   | binary-mirror-832553   | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC |                     |
	|         | binary-mirror-832553                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39563               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-832553              | binary-mirror-832553   | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC | 13 Sep 24 18:24 UTC |
	| addons  | disable dashboard -p                 | addons-365496          | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC |                     |
	|         | addons-365496                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-365496          | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC |                     |
	|         | addons-365496                        |                        |         |         |                     |                     |
	| start   | -p addons-365496 --wait=true         | addons-365496          | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC | 13 Sep 24 18:28 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:24:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:24:29.639225  300876 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:24:29.639460  300876 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:24:29.639473  300876 out.go:358] Setting ErrFile to fd 2...
	I0913 18:24:29.639479  300876 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:24:29.639755  300876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
	I0913 18:24:29.640310  300876 out.go:352] Setting JSON to false
	I0913 18:24:29.641294  300876 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7617,"bootTime":1726244253,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0913 18:24:29.641379  300876 start.go:139] virtualization:  
	I0913 18:24:29.643730  300876 out.go:177] * [addons-365496] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0913 18:24:29.645637  300876 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:24:29.645778  300876 notify.go:220] Checking for updates...
	I0913 18:24:29.649655  300876 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:24:29.652281  300876 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-294721/kubeconfig
	I0913 18:24:29.654977  300876 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-294721/.minikube
	I0913 18:24:29.657184  300876 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0913 18:24:29.659099  300876 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:24:29.661395  300876 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:24:29.689383  300876 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 18:24:29.689516  300876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:24:29.748655  300876 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-13 18:24:29.73901111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:24:29.748801  300876 docker.go:318] overlay module found
	I0913 18:24:29.752469  300876 out.go:177] * Using the docker driver based on user configuration
	I0913 18:24:29.754683  300876 start.go:297] selected driver: docker
	I0913 18:24:29.754709  300876 start.go:901] validating driver "docker" against <nil>
	I0913 18:24:29.754725  300876 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:24:29.755371  300876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:24:29.806312  300876 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-13 18:24:29.79658602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:24:29.806527  300876 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:24:29.806760  300876 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:24:29.809531  300876 out.go:177] * Using Docker driver with root privileges
	I0913 18:24:29.811912  300876 cni.go:84] Creating CNI manager for ""
	I0913 18:24:29.811978  300876 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0913 18:24:29.811999  300876 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0913 18:24:29.812091  300876 start.go:340] cluster config:
	{Name:addons-365496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-365496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:24:29.815341  300876 out.go:177] * Starting "addons-365496" primary control-plane node in "addons-365496" cluster
	I0913 18:24:29.817436  300876 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0913 18:24:29.819627  300876 out.go:177] * Pulling base image v0.0.45-1726193793-19634 ...
	I0913 18:24:29.822617  300876 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0913 18:24:29.822668  300876 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-294721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0913 18:24:29.822681  300876 cache.go:56] Caching tarball of preloaded images
	I0913 18:24:29.822724  300876 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local docker daemon
	I0913 18:24:29.822783  300876 preload.go:172] Found /home/jenkins/minikube-integration/19636-294721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 18:24:29.822796  300876 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0913 18:24:29.823176  300876 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/config.json ...
	I0913 18:24:29.823257  300876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/config.json: {Name:mk2df2a23078c7d167e3c0fb72a0cce538d9b578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:24:29.843254  300876 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e to local cache
	I0913 18:24:29.843396  300876 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local cache directory
	I0913 18:24:29.843420  300876 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local cache directory, skipping pull
	I0913 18:24:29.843425  300876 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e exists in cache, skipping pull
	I0913 18:24:29.843434  300876 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e as a tarball
	I0913 18:24:29.843443  300876 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e from local cache
	I0913 18:24:47.371784  300876 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e from cached tarball
	I0913 18:24:47.371834  300876 cache.go:194] Successfully downloaded all kic artifacts
	I0913 18:24:47.371891  300876 start.go:360] acquireMachinesLock for addons-365496: {Name:mk749daf7a7766074f9c763c16ee6bf077a81cf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:24:47.372022  300876 start.go:364] duration metric: took 104.615µs to acquireMachinesLock for "addons-365496"
	I0913 18:24:47.372057  300876 start.go:93] Provisioning new machine with config: &{Name:addons-365496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-365496 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0913 18:24:47.372135  300876 start.go:125] createHost starting for "" (driver="docker")
	I0913 18:24:47.374970  300876 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0913 18:24:47.375242  300876 start.go:159] libmachine.API.Create for "addons-365496" (driver="docker")
	I0913 18:24:47.375287  300876 client.go:168] LocalClient.Create starting
	I0913 18:24:47.375394  300876 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19636-294721/.minikube/certs/ca.pem
	I0913 18:24:47.572358  300876 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19636-294721/.minikube/certs/cert.pem
	I0913 18:24:47.927712  300876 cli_runner.go:164] Run: docker network inspect addons-365496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0913 18:24:47.948313  300876 cli_runner.go:211] docker network inspect addons-365496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0913 18:24:47.948415  300876 network_create.go:284] running [docker network inspect addons-365496] to gather additional debugging logs...
	I0913 18:24:47.948435  300876 cli_runner.go:164] Run: docker network inspect addons-365496
	W0913 18:24:47.963453  300876 cli_runner.go:211] docker network inspect addons-365496 returned with exit code 1
	I0913 18:24:47.963485  300876 network_create.go:287] error running [docker network inspect addons-365496]: docker network inspect addons-365496: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-365496 not found
	I0913 18:24:47.963500  300876 network_create.go:289] output of [docker network inspect addons-365496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-365496 not found
	
	** /stderr **
	I0913 18:24:47.963604  300876 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0913 18:24:47.980319  300876 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b3efe0}
	I0913 18:24:47.980363  300876 network_create.go:124] attempt to create docker network addons-365496 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0913 18:24:47.980420  300876 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-365496 addons-365496
	I0913 18:24:48.064109  300876 network_create.go:108] docker network addons-365496 192.168.49.0/24 created
	I0913 18:24:48.064143  300876 kic.go:121] calculated static IP "192.168.49.2" for the "addons-365496" container
	I0913 18:24:48.064228  300876 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0913 18:24:48.079914  300876 cli_runner.go:164] Run: docker volume create addons-365496 --label name.minikube.sigs.k8s.io=addons-365496 --label created_by.minikube.sigs.k8s.io=true
	I0913 18:24:48.098320  300876 oci.go:103] Successfully created a docker volume addons-365496
	I0913 18:24:48.098426  300876 cli_runner.go:164] Run: docker run --rm --name addons-365496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-365496 --entrypoint /usr/bin/test -v addons-365496:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e -d /var/lib
	I0913 18:24:50.212177  300876 cli_runner.go:217] Completed: docker run --rm --name addons-365496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-365496 --entrypoint /usr/bin/test -v addons-365496:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e -d /var/lib: (2.113685151s)
	I0913 18:24:50.212209  300876 oci.go:107] Successfully prepared a docker volume addons-365496
	I0913 18:24:50.212237  300876 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0913 18:24:50.212258  300876 kic.go:194] Starting extracting preloaded images to volume ...
	I0913 18:24:50.212331  300876 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19636-294721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-365496:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e -I lz4 -xf /preloaded.tar -C /extractDir
	I0913 18:24:54.284557  300876 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19636-294721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-365496:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e -I lz4 -xf /preloaded.tar -C /extractDir: (4.072180362s)
	I0913 18:24:54.284591  300876 kic.go:203] duration metric: took 4.072330548s to extract preloaded images to volume ...
	W0913 18:24:54.284730  300876 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0913 18:24:54.284848  300876 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0913 18:24:54.335096  300876 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-365496 --name addons-365496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-365496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-365496 --network addons-365496 --ip 192.168.49.2 --volume addons-365496:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e
	I0913 18:24:54.655201  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Running}}
	I0913 18:24:54.678468  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:24:54.701107  300876 cli_runner.go:164] Run: docker exec addons-365496 stat /var/lib/dpkg/alternatives/iptables
	I0913 18:24:54.765481  300876 oci.go:144] the created container "addons-365496" has a running status.
	I0913 18:24:54.765509  300876 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa...
	I0913 18:24:55.245662  300876 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0913 18:24:55.274930  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:24:55.300708  300876 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0913 18:24:55.300733  300876 kic_runner.go:114] Args: [docker exec --privileged addons-365496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0913 18:24:55.382937  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:24:55.404071  300876 machine.go:93] provisionDockerMachine start ...
	I0913 18:24:55.404174  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:24:55.426305  300876 main.go:141] libmachine: Using SSH client type: native
	I0913 18:24:55.426577  300876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0913 18:24:55.426587  300876 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 18:24:55.595601  300876 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-365496
	
	I0913 18:24:55.595685  300876 ubuntu.go:169] provisioning hostname "addons-365496"
	I0913 18:24:55.595778  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:24:55.612839  300876 main.go:141] libmachine: Using SSH client type: native
	I0913 18:24:55.613071  300876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0913 18:24:55.613082  300876 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-365496 && echo "addons-365496" | sudo tee /etc/hostname
	I0913 18:24:55.775089  300876 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-365496
	
	I0913 18:24:55.775178  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:24:55.792639  300876 main.go:141] libmachine: Using SSH client type: native
	I0913 18:24:55.792891  300876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0913 18:24:55.792916  300876 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-365496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-365496/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-365496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:24:55.940405  300876 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:24:55.940499  300876 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19636-294721/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-294721/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-294721/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-294721/.minikube}
	I0913 18:24:55.940553  300876 ubuntu.go:177] setting up certificates
	I0913 18:24:55.940584  300876 provision.go:84] configureAuth start
	I0913 18:24:55.940684  300876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-365496
	I0913 18:24:55.956932  300876 provision.go:143] copyHostCerts
	I0913 18:24:55.957017  300876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-294721/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-294721/.minikube/ca.pem (1078 bytes)
	I0913 18:24:55.957144  300876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-294721/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-294721/.minikube/cert.pem (1123 bytes)
	I0913 18:24:55.957213  300876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-294721/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-294721/.minikube/key.pem (1679 bytes)
	I0913 18:24:55.957282  300876 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-294721/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-294721/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-294721/.minikube/certs/ca-key.pem org=jenkins.addons-365496 san=[127.0.0.1 192.168.49.2 addons-365496 localhost minikube]
	I0913 18:24:56.484797  300876 provision.go:177] copyRemoteCerts
	I0913 18:24:56.484888  300876 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:24:56.484946  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:24:56.505234  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:24:56.608771  300876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-294721/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 18:24:56.633191  300876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-294721/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 18:24:56.657292  300876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-294721/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 18:24:56.682473  300876 provision.go:87] duration metric: took 741.84889ms to configureAuth
	I0913 18:24:56.682502  300876 ubuntu.go:193] setting minikube options for container-runtime
	I0913 18:24:56.682702  300876 config.go:182] Loaded profile config "addons-365496": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0913 18:24:56.682710  300876 machine.go:96] duration metric: took 1.278614475s to provisionDockerMachine
	I0913 18:24:56.682717  300876 client.go:171] duration metric: took 9.307420337s to LocalClient.Create
	I0913 18:24:56.682741  300876 start.go:167] duration metric: took 9.307500001s to libmachine.API.Create "addons-365496"
	I0913 18:24:56.682750  300876 start.go:293] postStartSetup for "addons-365496" (driver="docker")
	I0913 18:24:56.682759  300876 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:24:56.682813  300876 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:24:56.682860  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:24:56.699985  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:24:56.801140  300876 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 18:24:56.804507  300876 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0913 18:24:56.804545  300876 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0913 18:24:56.804559  300876 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0913 18:24:56.804567  300876 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0913 18:24:56.804578  300876 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-294721/.minikube/addons for local assets ...
	I0913 18:24:56.804651  300876 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-294721/.minikube/files for local assets ...
	I0913 18:24:56.804680  300876 start.go:296] duration metric: took 121.92474ms for postStartSetup
	I0913 18:24:56.804994  300876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-365496
	I0913 18:24:56.824091  300876 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/config.json ...
	I0913 18:24:56.824402  300876 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:24:56.824456  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:24:56.840746  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:24:56.936801  300876 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0913 18:24:56.941357  300876 start.go:128] duration metric: took 9.569204693s to createHost
	I0913 18:24:56.941380  300876 start.go:83] releasing machines lock for "addons-365496", held for 9.569342039s
	I0913 18:24:56.941453  300876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-365496
	I0913 18:24:56.957462  300876 ssh_runner.go:195] Run: cat /version.json
	I0913 18:24:56.957496  300876 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 18:24:56.957516  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:24:56.957573  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:24:56.974627  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:24:56.984210  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:24:57.206802  300876 ssh_runner.go:195] Run: systemctl --version
	I0913 18:24:57.211239  300876 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0913 18:24:57.215463  300876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0913 18:24:57.242027  300876 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0913 18:24:57.242137  300876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:24:57.270934  300876 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
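	The two find passes above first normalize any loopback CNI config, inserting a "name" field when one is missing and pinning cniVersion to 1.0.0, then rename every bridge/podman config with a .mk_disabled suffix so it cannot conflict with the CNI minikube installs later. Reconstructed from the sed expressions (not a dump of the actual file), a patched loopback config comes out roughly as:
	
	    {
	        "cniVersion": "1.0.0",
	        "name": "loopback",
	        "type": "loopback"
	    }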
	I0913 18:24:57.271002  300876 start.go:495] detecting cgroup driver to use...
	I0913 18:24:57.271052  300876 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0913 18:24:57.271152  300876 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0913 18:24:57.283708  300876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 18:24:57.295150  300876 docker.go:217] disabling cri-docker service (if available) ...
	I0913 18:24:57.295288  300876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 18:24:57.309805  300876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 18:24:57.324533  300876 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 18:24:57.403788  300876 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 18:24:57.495618  300876 docker.go:233] disabling docker service ...
	I0913 18:24:57.495707  300876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 18:24:57.515636  300876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 18:24:57.528057  300876 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 18:24:57.613913  300876 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 18:24:57.717121  300876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 18:24:57.729300  300876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:24:57.745549  300876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0913 18:24:57.755645  300876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 18:24:57.769306  300876 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 18:24:57.769431  300876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 18:24:57.779691  300876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 18:24:57.790378  300876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 18:24:57.800645  300876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 18:24:57.810834  300876 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:24:57.819929  300876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 18:24:57.829528  300876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 18:24:57.839486  300876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
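	Taken together, the sed edits above align /etc/containerd/config.toml with the "cgroupfs" driver detected earlier and with minikube's CRI expectations. Reconstructed from the commands rather than dumped from the node, the relevant CRI-plugin settings come out roughly as:
	
	    [plugins."io.containerd.grpc.v1.cri"]
	      enable_unprivileged_ports = true
	      sandbox_image = "registry.k8s.io/pause:3.10"
	      restrict_oom_score_adj = false
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	        SystemdCgroup = false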
	I0913 18:24:57.849873  300876 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:24:57.858762  300876 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 18:24:57.867394  300876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:24:57.955490  300876 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0913 18:24:58.100598  300876 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0913 18:24:58.100694  300876 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0913 18:24:58.105062  300876 start.go:563] Will wait 60s for crictl version
	I0913 18:24:58.105132  300876 ssh_runner.go:195] Run: which crictl
	I0913 18:24:58.108808  300876 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 18:24:58.145398  300876 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0913 18:24:58.145479  300876 ssh_runner.go:195] Run: containerd --version
	I0913 18:24:58.167780  300876 ssh_runner.go:195] Run: containerd --version
	I0913 18:24:58.192508  300876 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0913 18:24:58.194548  300876 cli_runner.go:164] Run: docker network inspect addons-365496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0913 18:24:58.208910  300876 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0913 18:24:58.212595  300876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
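	The one-liner above is a filter-then-append pattern: grep -v strips any stale host.minikube.internal entry, echo appends the fresh mapping, and the result goes to a temp file first because redirecting straight back into /etc/hosts would truncate the file while grep is still reading it. The same pattern recurs below for control-plane.minikube.internal. A quick check of the result (expected output as a comment):
	
	    grep host.minikube.internal /etc/hosts
	    # 192.168.49.1	host.minikube.internal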
	I0913 18:24:58.223196  300876 kubeadm.go:883] updating cluster {Name:addons-365496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-365496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 18:24:58.223354  300876 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0913 18:24:58.223417  300876 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:24:58.258802  300876 containerd.go:627] all images are preloaded for containerd runtime.
	I0913 18:24:58.258825  300876 containerd.go:534] Images already preloaded, skipping extraction
	I0913 18:24:58.258883  300876 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:24:58.294335  300876 containerd.go:627] all images are preloaded for containerd runtime.
	I0913 18:24:58.294357  300876 cache_images.go:84] Images are preloaded, skipping loading
	I0913 18:24:58.294365  300876 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0913 18:24:58.294465  300876 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-365496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-365496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
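	In the generated unit above, the empty ExecStart= is the standard systemd drop-in idiom: it clears any ExecStart inherited from the base kubelet.service before the second ExecStart supplies the full minikube command line. The drop-in itself is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below; the merged unit can be inspected with:
	
	    # Show kubelet.service plus every drop-in, in the order systemd applies them:
	    systemctl cat kubelet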
	I0913 18:24:58.294536  300876 ssh_runner.go:195] Run: sudo crictl info
	I0913 18:24:58.330527  300876 cni.go:84] Creating CNI manager for ""
	I0913 18:24:58.330553  300876 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0913 18:24:58.330564  300876 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 18:24:58.330586  300876 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-365496 NodeName:addons-365496 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 18:24:58.330720  300876 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-365496"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
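	Note that the generated config above still uses the kubeadm.k8s.io/v1beta3 API group; kubeadm v1.31 accepts it but prints deprecation warnings during init (visible further down) and recommends migrating. A hedged sketch of that migration, using the /var/tmp/minikube/kubeadm.yaml path this config is copied to later in the run and a hypothetical output name:
	
	    # Rewrite the v1beta3 config as the newer API version kubeadm suggests:
	    sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml \
	      --new-config /var/tmp/minikube/kubeadm-migrated.yaml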
	
	I0913 18:24:58.330796  300876 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:24:58.339651  300876 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 18:24:58.339722  300876 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 18:24:58.348465  300876 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0913 18:24:58.366974  300876 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:24:58.385747  300876 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0913 18:24:58.404526  300876 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0913 18:24:58.407932  300876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:24:58.419052  300876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:24:58.495375  300876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:24:58.511026  300876 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496 for IP: 192.168.49.2
	I0913 18:24:58.511090  300876 certs.go:194] generating shared ca certs ...
	I0913 18:24:58.511123  300876 certs.go:226] acquiring lock for ca certs: {Name:mkcfea799d72a2f680b36929e56fe310238b284d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:24:58.511293  300876 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-294721/.minikube/ca.key
	I0913 18:24:59.035186  300876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-294721/.minikube/ca.crt ...
	I0913 18:24:59.035227  300876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-294721/.minikube/ca.crt: {Name:mkfa39b4c69758910a16362024f6353f30d0f7c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:24:59.035439  300876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-294721/.minikube/ca.key ...
	I0913 18:24:59.035454  300876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-294721/.minikube/ca.key: {Name:mk2fd87d66df25022b5f2efdd2eb2d305c50e85f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:24:59.035930  300876 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-294721/.minikube/proxy-client-ca.key
	I0913 18:24:59.563387  300876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-294721/.minikube/proxy-client-ca.crt ...
	I0913 18:24:59.563421  300876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-294721/.minikube/proxy-client-ca.crt: {Name:mk9728107ab9c42757b9c6da463fd37adf684a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:24:59.564022  300876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-294721/.minikube/proxy-client-ca.key ...
	I0913 18:24:59.564038  300876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-294721/.minikube/proxy-client-ca.key: {Name:mk9c15f225129c8039f14b758d05d6288fefbf97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:24:59.564498  300876 certs.go:256] generating profile certs ...
	I0913 18:24:59.564566  300876 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.key
	I0913 18:24:59.564584  300876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt with IP's: []
	I0913 18:24:59.752949  300876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt ...
	I0913 18:24:59.752982  300876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: {Name:mka0c8e1b7147e05724f14b1f392b48065f0281d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:24:59.753175  300876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.key ...
	I0913 18:24:59.753188  300876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.key: {Name:mk5134347a6fabeffadaf4d463f513bce01cbc03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:24:59.753278  300876 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/apiserver.key.5eae004d
	I0913 18:24:59.753298  300876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/apiserver.crt.5eae004d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0913 18:25:00.103510  300876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/apiserver.crt.5eae004d ...
	I0913 18:25:00.103546  300876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/apiserver.crt.5eae004d: {Name:mk0c01d8342fd876e927c87f5db8ee494908f8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:25:00.103841  300876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/apiserver.key.5eae004d ...
	I0913 18:25:00.103864  300876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/apiserver.key.5eae004d: {Name:mk046f2ee84c2bc239a938e108ef5185102b452b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:25:00.103986  300876 certs.go:381] copying /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/apiserver.crt.5eae004d -> /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/apiserver.crt
	I0913 18:25:00.104079  300876 certs.go:385] copying /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/apiserver.key.5eae004d -> /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/apiserver.key
	I0913 18:25:00.104130  300876 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/proxy-client.key
	I0913 18:25:00.104148  300876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/proxy-client.crt with IP's: []
	I0913 18:25:00.505130  300876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/proxy-client.crt ...
	I0913 18:25:00.505167  300876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/proxy-client.crt: {Name:mkab12d195b2f18510f1918935ba15f03fd455c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:25:00.511518  300876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/proxy-client.key ...
	I0913 18:25:00.511558  300876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/proxy-client.key: {Name:mk214148346bca35e5232dce22b3c50a993b68fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:25:00.511805  300876 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-294721/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 18:25:00.511885  300876 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-294721/.minikube/certs/ca.pem (1078 bytes)
	I0913 18:25:00.511919  300876 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-294721/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:25:00.511952  300876 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-294721/.minikube/certs/key.pem (1679 bytes)
	I0913 18:25:00.512669  300876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-294721/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:25:00.542728  300876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-294721/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 18:25:00.573321  300876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-294721/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:25:00.600303  300876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-294721/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 18:25:00.627125  300876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0913 18:25:00.653679  300876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 18:25:00.679907  300876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:25:00.706362  300876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 18:25:00.732549  300876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-294721/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:25:00.760189  300876 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 18:25:00.780115  300876 ssh_runner.go:195] Run: openssl version
	I0913 18:25:00.786339  300876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:25:00.796341  300876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:25:00.800316  300876 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:24 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:25:00.800418  300876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:25:00.808238  300876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
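	The openssl x509 -hash call above is what names the b5213941.0 symlink: OpenSSL locates trusted CAs by hashing the certificate subject and looking up /etc/ssl/certs/<hash>.<n>, so linking minikubeCA.pem under its subject hash is what lets TLS verification on the node find it:
	
	    # Subject hash that names the symlink (matches the -hash output above):
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # b5213941
	    ls -l /etc/ssl/certs/b5213941.0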
	I0913 18:25:00.818588  300876 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:25:00.822216  300876 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 18:25:00.822267  300876 kubeadm.go:392] StartCluster: {Name:addons-365496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-365496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:25:00.822356  300876 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0913 18:25:00.822422  300876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 18:25:00.861232  300876 cri.go:89] found id: ""
	I0913 18:25:00.861307  300876 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 18:25:00.870499  300876 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 18:25:00.879805  300876 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0913 18:25:00.879963  300876 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 18:25:00.889845  300876 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 18:25:00.889868  300876 kubeadm.go:157] found existing configuration files:
	
	I0913 18:25:00.889942  300876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 18:25:00.899434  300876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 18:25:00.899508  300876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 18:25:00.908551  300876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 18:25:00.917826  300876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 18:25:00.917916  300876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 18:25:00.926539  300876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 18:25:00.935739  300876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 18:25:00.935833  300876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 18:25:00.944643  300876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 18:25:00.954401  300876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 18:25:00.954475  300876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 18:25:00.963591  300876 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0913 18:25:01.007354  300876 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 18:25:01.007419  300876 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 18:25:01.038248  300876 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0913 18:25:01.038327  300876 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0913 18:25:01.038368  300876 kubeadm.go:310] OS: Linux
	I0913 18:25:01.038419  300876 kubeadm.go:310] CGROUPS_CPU: enabled
	I0913 18:25:01.038471  300876 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0913 18:25:01.038523  300876 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0913 18:25:01.038575  300876 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0913 18:25:01.038629  300876 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0913 18:25:01.038686  300876 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0913 18:25:01.038735  300876 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0913 18:25:01.038784  300876 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0913 18:25:01.038834  300876 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0913 18:25:01.119028  300876 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 18:25:01.119145  300876 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 18:25:01.119262  300876 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 18:25:01.128277  300876 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 18:25:01.131253  300876 out.go:235]   - Generating certificates and keys ...
	I0913 18:25:01.131399  300876 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 18:25:01.131483  300876 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 18:25:01.526009  300876 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 18:25:02.049016  300876 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 18:25:02.919499  300876 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 18:25:03.053637  300876 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 18:25:03.640156  300876 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 18:25:03.640479  300876 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-365496 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0913 18:25:03.982538  300876 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 18:25:03.982864  300876 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-365496 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0913 18:25:04.637803  300876 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 18:25:05.321823  300876 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 18:25:06.187331  300876 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 18:25:06.187659  300876 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 18:25:06.793805  300876 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 18:25:07.641536  300876 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 18:25:09.047296  300876 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 18:25:10.234052  300876 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 18:25:10.898995  300876 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 18:25:10.901867  300876 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 18:25:10.904744  300876 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 18:25:10.906866  300876 out.go:235]   - Booting up control plane ...
	I0913 18:25:10.906974  300876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 18:25:10.907051  300876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 18:25:10.907798  300876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 18:25:10.920025  300876 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 18:25:10.926500  300876 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 18:25:10.926868  300876 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 18:25:11.030335  300876 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 18:25:11.030478  300876 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 18:25:12.534076  300876 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.50394154s
	I0913 18:25:12.534165  300876 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 18:25:19.036479  300876 kubeadm.go:310] [api-check] The API server is healthy after 6.502220303s
	I0913 18:25:19.062169  300876 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 18:25:19.076042  300876 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 18:25:19.100310  300876 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 18:25:19.100532  300876 kubeadm.go:310] [mark-control-plane] Marking the node addons-365496 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 18:25:19.110811  300876 kubeadm.go:310] [bootstrap-token] Using token: r3yeef.060crf9oxzvbcg7z
	I0913 18:25:19.112918  300876 out.go:235]   - Configuring RBAC rules ...
	I0913 18:25:19.113050  300876 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 18:25:19.117763  300876 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 18:25:19.125668  300876 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 18:25:19.129524  300876 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 18:25:19.134911  300876 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 18:25:19.138720  300876 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 18:25:19.449341  300876 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 18:25:19.882855  300876 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 18:25:20.449519  300876 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 18:25:20.450606  300876 kubeadm.go:310] 
	I0913 18:25:20.450682  300876 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 18:25:20.450688  300876 kubeadm.go:310] 
	I0913 18:25:20.450781  300876 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 18:25:20.450786  300876 kubeadm.go:310] 
	I0913 18:25:20.450811  300876 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 18:25:20.450870  300876 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 18:25:20.450920  300876 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 18:25:20.450925  300876 kubeadm.go:310] 
	I0913 18:25:20.450978  300876 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 18:25:20.450982  300876 kubeadm.go:310] 
	I0913 18:25:20.451030  300876 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 18:25:20.451035  300876 kubeadm.go:310] 
	I0913 18:25:20.451086  300876 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 18:25:20.451160  300876 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 18:25:20.451228  300876 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 18:25:20.451232  300876 kubeadm.go:310] 
	I0913 18:25:20.451341  300876 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 18:25:20.451419  300876 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 18:25:20.451423  300876 kubeadm.go:310] 
	I0913 18:25:20.451507  300876 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r3yeef.060crf9oxzvbcg7z \
	I0913 18:25:20.451608  300876 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13941826923f22ac0db2edd39cf0a3b801bee2d7b6f854537f110b8d070e63eb \
	I0913 18:25:20.451631  300876 kubeadm.go:310] 	--control-plane 
	I0913 18:25:20.451635  300876 kubeadm.go:310] 
	I0913 18:25:20.451718  300876 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 18:25:20.451723  300876 kubeadm.go:310] 
	I0913 18:25:20.451803  300876 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r3yeef.060crf9oxzvbcg7z \
	I0913 18:25:20.451932  300876 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13941826923f22ac0db2edd39cf0a3b801bee2d7b6f854537f110b8d070e63eb 
	I0913 18:25:20.454537  300876 kubeadm.go:310] W0913 18:25:01.003911    1012 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:25:20.454947  300876 kubeadm.go:310] W0913 18:25:01.004812    1012 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:25:20.455238  300876 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0913 18:25:20.455399  300876 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
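	The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key; a joining node uses it to pin the control plane's identity before trusting it. It can be recomputed from the CA certificate with the standard openssl pipeline (CA path as provisioned earlier in this run):
	
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	    # 13941826923f22ac0db2edd39cf0a3b801bee2d7b6f854537f110b8d070e63eb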
	I0913 18:25:20.455426  300876 cni.go:84] Creating CNI manager for ""
	I0913 18:25:20.455437  300876 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0913 18:25:20.459080  300876 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0913 18:25:20.460975  300876 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0913 18:25:20.465605  300876 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0913 18:25:20.465626  300876 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0913 18:25:20.484731  300876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0913 18:25:20.773769  300876 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 18:25:20.773903  300876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:25:20.773953  300876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-365496 minikube.k8s.io/updated_at=2024_09_13T18_25_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=addons-365496 minikube.k8s.io/primary=true
	I0913 18:25:20.787318  300876 ops.go:34] apiserver oom_adj: -16
	I0913 18:25:20.988940  300876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:25:21.489724  300876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:25:21.990018  300876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:25:22.489274  300876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:25:22.989786  300876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:25:23.489950  300876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:25:23.989590  300876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:25:24.489849  300876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:25:24.607506  300876 kubeadm.go:1113] duration metric: took 3.833651264s to wait for elevateKubeSystemPrivileges
	I0913 18:25:24.607535  300876 kubeadm.go:394] duration metric: took 23.785273092s to StartCluster
	I0913 18:25:24.607553  300876 settings.go:142] acquiring lock: {Name:mk333e8204c78b81aa3f1acc0bed9e0be37c938b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:25:24.608063  300876 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-294721/kubeconfig
	I0913 18:25:24.608461  300876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-294721/kubeconfig: {Name:mkda3b252234a454ce87630794d60843f992d9e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:25:24.609016  300876 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0913 18:25:24.609246  300876 config.go:182] Loaded profile config "addons-365496": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0913 18:25:24.609277  300876 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0913 18:25:24.609371  300876 addons.go:69] Setting yakd=true in profile "addons-365496"
	I0913 18:25:24.609384  300876 addons.go:234] Setting addon yakd=true in "addons-365496"
	I0913 18:25:24.609408  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.609864  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.609053  300876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 18:25:24.610395  300876 addons.go:69] Setting cloud-spanner=true in profile "addons-365496"
	I0913 18:25:24.610421  300876 addons.go:234] Setting addon cloud-spanner=true in "addons-365496"
	I0913 18:25:24.610448  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.610954  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.612236  300876 out.go:177] * Verifying Kubernetes components...
	I0913 18:25:24.612565  300876 addons.go:69] Setting registry=true in profile "addons-365496"
	I0913 18:25:24.612604  300876 addons.go:234] Setting addon registry=true in "addons-365496"
	I0913 18:25:24.612661  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.613999  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.614504  300876 addons.go:69] Setting storage-provisioner=true in profile "addons-365496"
	I0913 18:25:24.614558  300876 addons.go:234] Setting addon storage-provisioner=true in "addons-365496"
	I0913 18:25:24.614602  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.616464  300876 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-365496"
	I0913 18:25:24.616520  300876 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-365496"
	I0913 18:25:24.616551  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.616975  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.615936  300876 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-365496"
	I0913 18:25:24.617448  300876 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-365496"
	I0913 18:25:24.617766  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.615962  300876 addons.go:69] Setting volcano=true in profile "addons-365496"
	I0913 18:25:24.630808  300876 addons.go:234] Setting addon volcano=true in "addons-365496"
	I0913 18:25:24.644108  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.644761  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.630871  300876 addons.go:69] Setting default-storageclass=true in profile "addons-365496"
	I0913 18:25:24.653230  300876 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-365496"
	I0913 18:25:24.653709  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.630892  300876 addons.go:69] Setting gcp-auth=true in profile "addons-365496"
	I0913 18:25:24.668140  300876 mustload.go:65] Loading cluster: addons-365496
	I0913 18:25:24.668344  300876 config.go:182] Loaded profile config "addons-365496": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0913 18:25:24.668605  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.630899  300876 addons.go:69] Setting ingress=true in profile "addons-365496"
	I0913 18:25:24.680082  300876 addons.go:234] Setting addon ingress=true in "addons-365496"
	I0913 18:25:24.680149  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.680641  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.630907  300876 addons.go:69] Setting ingress-dns=true in profile "addons-365496"
	I0913 18:25:24.692805  300876 addons.go:234] Setting addon ingress-dns=true in "addons-365496"
	I0913 18:25:24.692940  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.693557  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.694064  300876 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 18:25:24.703325  300876 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 18:25:24.703677  300876 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 18:25:24.703725  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 18:25:24.703822  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:24.630914  300876 addons.go:69] Setting inspektor-gadget=true in profile "addons-365496"
	I0913 18:25:24.704499  300876 addons.go:234] Setting addon inspektor-gadget=true in "addons-365496"
	I0913 18:25:24.704560  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.705215  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.708903  300876 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 18:25:24.708966  300876 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 18:25:24.709056  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
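The inspect -f template over NetworkSettings.Ports resolves which host port Docker mapped to the node container's sshd (22/tcp); the sshutil lines that follow dial that port (127.0.0.1:33138 in this run) with the profile's id_rsa key. A sketch of the same lookup, assuming the container name from this log:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // The same Go template the log runs: first port binding for 22/tcp.
    tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    out, err := exec.Command("docker", "container", "inspect",
        "-f", tmpl, "addons-365496").Output()
    if err != nil {
        panic(err)
    }
    fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 33138
}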
	I0913 18:25:24.630921  300876 addons.go:69] Setting metrics-server=true in profile "addons-365496"
	I0913 18:25:24.764436  300876 addons.go:234] Setting addon metrics-server=true in "addons-365496"
	I0913 18:25:24.764491  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.764984  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.630929  300876 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-365496"
	I0913 18:25:24.784111  300876 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-365496"
	I0913 18:25:24.784153  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.784619  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.631417  300876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:25:24.631807  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.804848  300876 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 18:25:24.810026  300876 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-365496"
	I0913 18:25:24.810078  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.810515  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.615972  300876 addons.go:69] Setting volumesnapshots=true in profile "addons-365496"
	I0913 18:25:24.815138  300876 addons.go:234] Setting addon volumesnapshots=true in "addons-365496"
	I0913 18:25:24.815180  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.815662  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.836288  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.837338  300876 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 18:25:24.844258  300876 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0913 18:25:24.846489  300876 out.go:177]   - Using image docker.io/registry:2.8.3
	I0913 18:25:24.838632  300876 addons.go:234] Setting addon default-storageclass=true in "addons-365496"
	I0913 18:25:24.871456  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:24.887504  300876 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0913 18:25:24.887715  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:24.891987  300876 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 18:25:24.892322  300876 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 18:25:24.892796  300876 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0913 18:25:24.893276  300876 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0913 18:25:24.893612  300876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
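The bash pipeline above edits the kube-system/coredns ConfigMap in flight: sed inserts a hosts block immediately before the Corefile's forward plugin and a log directive before errors, then kubectl replace writes the result back. Reconstructed from those sed expressions, the patched Corefile fragment should look roughly like this (unchanged plugins elided):

        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf ...

This is what lets pods resolve host.minikube.internal to the host gateway, confirmed by the "host record injected into CoreDNS's ConfigMap" line at 18:25:27 below.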
	I0913 18:25:24.918740  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:24.919458  300876 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0913 18:25:24.919446  300876 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 18:25:24.919562  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 18:25:24.920063  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:24.924111  300876 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:25:24.928755  300876 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 18:25:24.931047  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0913 18:25:24.931172  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:24.945416  300876 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 18:25:24.947259  300876 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 18:25:24.947605  300876 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 18:25:24.947622  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0913 18:25:24.947684  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:24.950849  300876 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 18:25:24.952697  300876 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 18:25:24.955229  300876 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 18:25:24.957087  300876 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:25:24.957191  300876 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 18:25:24.957206  300876 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 18:25:24.957274  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:24.990936  300876 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 18:25:24.990958  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0913 18:25:24.991023  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:25.002992  300876 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 18:25:25.003357  300876 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 18:25:25.003076  300876 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 18:25:25.003174  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:25.014146  300876 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 18:25:25.016774  300876 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 18:25:25.017160  300876 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:25:25.017478  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 18:25:25.017639  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:25.030878  300876 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:25:25.030902  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 18:25:25.030985  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:25.017187  300876 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 18:25:25.032148  300876 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 18:25:25.032229  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:25.017194  300876 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 18:25:25.068038  300876 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 18:25:25.068114  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:25.084154  300876 out.go:177]   - Using image docker.io/busybox:stable
	I0913 18:25:25.086611  300876 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:25:25.086631  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 18:25:25.086695  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:25.090212  300876 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 18:25:25.092219  300876 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 18:25:25.092253  300876 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 18:25:25.092335  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:25.125798  300876 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 18:25:25.125821  300876 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 18:25:25.125889  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:25.130137  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:25.130613  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:25.146034  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:25.167025  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:25.220557  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:25.221948  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:25.224969  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:25.233237  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:25.239289  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:25.247821  300876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:25:25.249094  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:25.272097  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:25.274417  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	W0913 18:25:25.275922  300876 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0913 18:25:25.275950  300876 retry.go:31] will retry after 181.749268ms: ssh: handshake failed: EOF
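The handshake EOF above is apparently transient (a dozen parallel SSH dials are racing against the same sshd), so retry.go schedules another attempt after a randomized ~180ms delay instead of failing. A compact sketch of that retry-with-jitter pattern; the helper and its parameters are illustrative, not minikube's actual API:

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// retryWithJitter runs op up to attempts times, sleeping a randomized
// delay between failures, roughly like the retry.go:31 lines in this log.
func retryWithJitter(attempts int, base time.Duration, op func() error) error {
    var err error
    for i := 0; i < attempts; i++ {
        if err = op(); err == nil {
            return nil
        }
        d := base + time.Duration(rand.Int63n(int64(base))) // jittered delay
        fmt.Printf("will retry after %v: %v\n", d, err)
        time.Sleep(d)
    }
    return err
}

func main() {
    calls := 0
    err := retryWithJitter(5, 120*time.Millisecond, func() error {
        calls++
        if calls < 3 {
            return errors.New("ssh: handshake failed: EOF")
        }
        return nil
    })
    fmt.Println("err:", err) // nil once the third attempt succeeds
}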
	I0913 18:25:25.468533  300876 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 18:25:25.468611  300876 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 18:25:25.590330  300876 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 18:25:25.590405  300876 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 18:25:25.637192  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 18:25:25.745266  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:25:25.787573  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:25:25.811985  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:25:25.844939  300876 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 18:25:25.844964  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 18:25:25.851737  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 18:25:25.898837  300876 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 18:25:25.898864  300876 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 18:25:25.922777  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 18:25:25.933280  300876 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 18:25:25.933307  300876 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 18:25:25.971084  300876 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 18:25:25.971111  300876 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 18:25:25.973678  300876 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 18:25:25.973702  300876 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 18:25:26.014811  300876 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 18:25:26.014843  300876 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 18:25:26.051802  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 18:25:26.061319  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 18:25:26.248703  300876 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 18:25:26.248731  300876 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 18:25:26.303487  300876 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 18:25:26.303568  300876 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 18:25:26.316440  300876 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:25:26.316508  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 18:25:26.330039  300876 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 18:25:26.330119  300876 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 18:25:26.394657  300876 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:25:26.394732  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 18:25:26.409029  300876 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 18:25:26.409104  300876 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 18:25:26.550865  300876 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:25:26.550937  300876 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 18:25:26.656592  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:25:26.686734  300876 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 18:25:26.686797  300876 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 18:25:26.688903  300876 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 18:25:26.688972  300876 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 18:25:26.708195  300876 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 18:25:26.708268  300876 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 18:25:26.756790  300876 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 18:25:26.756870  300876 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 18:25:26.822559  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:25:26.891175  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:25:26.961494  300876 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 18:25:26.961568  300876 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 18:25:27.078344  300876 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 18:25:27.078422  300876 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 18:25:27.114236  300876 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:25:27.114308  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 18:25:27.242771  300876 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 18:25:27.242847  300876 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 18:25:27.256494  300876 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 18:25:27.256568  300876 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 18:25:27.329819  300876 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.411343557s)
	I0913 18:25:27.329846  300876 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0913 18:25:27.330836  300876 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.082963437s)
	I0913 18:25:27.331535  300876 node_ready.go:35] waiting up to 6m0s for node "addons-365496" to be "Ready" ...
	I0913 18:25:27.335076  300876 node_ready.go:49] node "addons-365496" has status "Ready":"True"
	I0913 18:25:27.335147  300876 node_ready.go:38] duration metric: took 3.593366ms for node "addons-365496" to be "Ready" ...
	I0913 18:25:27.335171  300876 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:25:27.348826  300876 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-88c7r" in "kube-system" namespace to be "Ready" ...
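node_ready.go and pod_ready.go both poll the API server until a Ready condition turns True, within the 6m0s budget above. A self-contained client-go sketch of the node half; the kubeconfig path and node name are taken from the log, the 2s interval is an assumption, and PollUntilContextTimeout requires a recent k8s.io/apimachinery:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
        func(ctx context.Context) (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(ctx, "addons-365496", metav1.GetOptions{})
            if err != nil {
                return false, nil // treat errors as transient and keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return true, nil
                }
            }
            return false, nil
        })
    fmt.Println("node ready:", err == nil)
}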
	I0913 18:25:27.375590  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:25:27.666616  300876 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 18:25:27.666689  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 18:25:27.676881  300876 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 18:25:27.676944  300876 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 18:25:27.847470  300876 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-365496" context rescaled to 1 replicas
	I0913 18:25:27.870455  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.233165181s)
	I0913 18:25:27.962733  300876 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:25:27.962803  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 18:25:27.977203  300876 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 18:25:27.977277  300876 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 18:25:28.116971  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:25:28.153155  300876 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 18:25:28.153225  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 18:25:28.335197  300876 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 18:25:28.335277  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 18:25:28.351948  300876 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-88c7r" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-88c7r" not found
	I0913 18:25:28.352022  300876 pod_ready.go:82] duration metric: took 1.003126016s for pod "coredns-7c65d6cfc9-88c7r" in "kube-system" namespace to be "Ready" ...
	E0913 18:25:28.352049  300876 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-88c7r" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-88c7r" not found
	I0913 18:25:28.352077  300876 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace to be "Ready" ...
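The "not found" error above is expected rather than a failure: at 18:25:27.847 the coredns deployment was rescaled to a single replica (roughly what kubectl -n kube-system scale deployment coredns --replicas=1 does), so the pod the waiter had first latched onto, coredns-7c65d6cfc9-88c7r, was deleted mid-wait. pod_ready logs the miss, skips that pod, and re-targets the surviving replica coredns-7c65d6cfc9-qx7rg.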
	I0913 18:25:28.609954  300876 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:25:28.610028  300876 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 18:25:28.883588  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:25:29.423563  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.678242826s)
	I0913 18:25:29.985967  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.134206918s)
	I0913 18:25:29.986070  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.198474583s)
	I0913 18:25:29.985890  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.173869952s)
	I0913 18:25:30.364750  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:25:32.095516  300876 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 18:25:32.095657  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:32.120742  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:32.437625  300876 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 18:25:32.593591  300876 addons.go:234] Setting addon gcp-auth=true in "addons-365496"
	I0913 18:25:32.593644  300876 host.go:66] Checking if "addons-365496" exists ...
	I0913 18:25:32.594117  300876 cli_runner.go:164] Run: docker container inspect addons-365496 --format={{.State.Status}}
	I0913 18:25:32.618230  300876 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 18:25:32.618285  300876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-365496
	I0913 18:25:32.659706  300876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/addons-365496/id_rsa Username:docker}
	I0913 18:25:32.858526  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:25:33.397404  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.47458741s)
	I0913 18:25:33.397437  300876 addons.go:475] Verifying addon ingress=true in "addons-365496"
	I0913 18:25:33.402276  300876 out.go:177] * Verifying ingress addon...
	I0913 18:25:33.405324  300876 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0913 18:25:33.409221  300876 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0913 18:25:33.409248  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
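kapi.go:75/96 implements the generic addon wait: list the pods matching a label selector, then re-check until none of them still reports Pending. A small client-go sketch of the selector query, using the namespace and selector from this run:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.Background(),
        metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=ingress-nginx"})
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        // The wait loop below keeps polling until no pod here is Pending.
        fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
    }
}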
	I0913 18:25:33.941002  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:34.445513  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:34.905091  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:25:34.948924  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:35.043280  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.991437918s)
	I0913 18:25:35.043346  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.982005314s)
	I0913 18:25:35.043580  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.386914895s)
	I0913 18:25:35.043601  300876 addons.go:475] Verifying addon metrics-server=true in "addons-365496"
	I0913 18:25:35.043639  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.221010994s)
	I0913 18:25:35.043772  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.15252447s)
	I0913 18:25:35.043787  300876 addons.go:475] Verifying addon registry=true in "addons-365496"
	I0913 18:25:35.043942  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.668316652s)
	W0913 18:25:35.043976  300876 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 18:25:35.043994  300876 retry.go:31] will retry after 160.071968ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
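This failure is an ordering race, not a broken manifest: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass in the same apply batch as the CRDs that define that kind, and the API server had not yet established the new CRDs when the custom resource arrived, hence "ensure CRDs are installed first". minikube simply retries (and at 18:25:35 below reissues the batch with kubectl apply --force, which succeeds). One way to avoid the race entirely is to apply the CRDs alone and wait for their Established condition before applying the custom resources; a sketch shelling out to kubectl, with the file paths copied from this log:

package main

import (
    "fmt"
    "os/exec"
)

func kubectl(args ...string) {
    out, err := exec.Command("kubectl", args...).CombinedOutput()
    fmt.Print(string(out))
    if err != nil {
        panic(err)
    }
}

func main() {
    // 1. Apply the CRD on its own.
    kubectl("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
    // 2. Block until the API server has established the new kind.
    kubectl("wait", "--for", "condition=established", "--timeout=60s",
        "crd/volumesnapshotclasses.snapshot.storage.k8s.io")
    // 3. Only now apply resources of that kind.
    kubectl("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
}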
	I0913 18:25:35.044070  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.927027208s)
	I0913 18:25:35.046771  300876 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-365496 service yakd-dashboard -n yakd-dashboard
	
	I0913 18:25:35.046907  300876 out.go:177] * Verifying registry addon...
	I0913 18:25:35.049655  300876 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 18:25:35.111742  300876 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 18:25:35.111775  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:35.204420  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:25:35.416997  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:35.556086  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:35.710837  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.827152537s)
	I0913 18:25:35.710878  300876 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-365496"
	I0913 18:25:35.711045  300876 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.092793993s)
	I0913 18:25:35.713064  300876 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 18:25:35.713119  300876 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 18:25:35.716634  300876 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 18:25:35.719136  300876 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:25:35.721087  300876 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 18:25:35.721117  300876 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 18:25:35.744285  300876 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 18:25:35.744363  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:35.859413  300876 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 18:25:35.859443  300876 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 18:25:35.910493  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:35.934281  300876 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:25:35.934302  300876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 18:25:35.982734  300876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:25:36.054424  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:36.233428  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:36.410086  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:36.553390  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:36.721486  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:36.914070  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:37.066359  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:37.095536  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.891057294s)
	I0913 18:25:37.095699  300876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.112892829s)
	I0913 18:25:37.099834  300876 addons.go:475] Verifying addon gcp-auth=true in "addons-365496"
	I0913 18:25:37.103207  300876 out.go:177] * Verifying gcp-auth addon...
	I0913 18:25:37.106539  300876 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 18:25:37.163105  300876 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 18:25:37.221629  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:37.358390  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:25:37.409734  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:37.553954  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:37.722848  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:37.909800  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:38.057095  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:38.257988  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:38.411180  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:38.554382  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:38.723104  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:38.914134  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:39.054484  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:39.223273  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:39.361344  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:25:39.410016  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:39.555498  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:39.722583  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:39.914282  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:40.054960  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:40.222070  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:40.410103  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:40.554252  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:40.722547  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:40.910357  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:41.054890  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:41.221854  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:41.410100  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:41.554940  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:41.721316  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:41.859334  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:25:41.910227  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:42.053942  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:42.224774  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:42.409780  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:42.553311  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:42.721439  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:42.915463  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:43.054490  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:43.222843  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:43.409823  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:43.553288  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:43.721338  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:43.910372  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:44.054571  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:44.257002  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:44.358453  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:25:44.412275  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:44.553863  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:44.721244  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:44.909645  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:45.058311  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:45.225012  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:45.410893  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:45.553594  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:45.721800  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:45.909953  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:46.053808  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:46.220938  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:46.358811  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:25:46.409404  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:46.553676  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:46.721889  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:46.909942  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:47.053510  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:47.222624  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:47.410538  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:47.553367  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:47.721473  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:47.910070  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:48.054310  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:48.222204  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:48.410000  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:48.553565  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:48.721927  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:48.858131  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:25:48.910032  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:49.053594  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:49.221930  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:49.410023  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:49.554115  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:49.721458  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:49.910124  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:50.054727  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:50.221586  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:50.410211  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:50.554289  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:50.722607  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:50.910061  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:51.054039  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:51.221948  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:51.358047  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:25:51.410260  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:51.553936  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:51.721089  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:51.909948  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:52.053967  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:52.221051  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:52.409617  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:52.553305  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:52.722316  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:52.910333  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:53.054064  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:53.221307  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:53.358789  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:25:53.409583  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:53.554269  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:53.721700  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:53.910865  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:54.053838  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:54.221348  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:54.410767  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:54.553119  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:54.732150  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:54.910160  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:55.053961  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:55.221836  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:55.410354  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:55.554001  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:55.720898  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:55.867748  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:25:55.910240  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:56.054724  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:56.221266  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:56.409724  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:56.553110  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:56.721464  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:56.909672  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:57.053307  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:57.221797  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:57.409429  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:57.554156  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:57.720797  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:57.909828  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:58.053468  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:58.221591  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:58.357954  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:25:58.409539  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:58.553528  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:58.756633  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:58.911149  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:59.053678  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:59.221512  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:59.409611  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:25:59.553125  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:25:59.721622  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:25:59.910359  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:00.105872  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:00.247650  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:00.361677  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:26:00.411644  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:00.554155  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:00.721992  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:00.910341  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:01.053802  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:01.222426  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:01.410167  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:01.554127  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:01.721601  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:01.910705  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:02.053333  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:02.221689  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:02.410708  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:02.553227  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:02.724147  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:02.858809  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:26:02.910125  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:03.054356  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:03.222726  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:03.410403  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:03.555194  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:03.721971  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:03.910481  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:04.053540  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:04.222234  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:04.410290  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:04.553675  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:04.721812  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:04.859447  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:26:04.910126  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:05.054416  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:05.221384  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:05.410441  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:05.554036  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:05.721724  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:05.909851  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:06.053665  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:06.221042  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:06.409752  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:06.553906  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:06.721509  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:06.911112  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:07.054041  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:07.221684  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:07.358821  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:26:07.409629  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:07.553238  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:07.721435  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:07.910412  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:08.054722  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:08.221232  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:08.409843  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:08.553575  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:08.731289  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:08.911963  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:09.054020  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:09.221665  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:09.409194  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:09.554207  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:09.721855  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:09.858473  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:26:09.909897  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:10.063193  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:10.222099  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:10.411094  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:10.554246  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:10.721797  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:10.912049  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:11.053930  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:11.222636  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:11.410757  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:11.553826  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:11.721874  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:11.861247  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:26:11.910089  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:12.062246  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:12.222422  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:12.410075  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:12.555922  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:12.722096  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:12.911358  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:13.054572  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:26:13.223339  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:13.411679  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:13.554429  300876 kapi.go:107] duration metric: took 38.504771779s to wait for kubernetes.io/minikube-addons=registry ...
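The `kapi.go:96` lines above show minikube's addon waiter polling pods by label selector roughly every half second until each selector resolves; the registry wait has just completed after 38.5s. A minimal client-go sketch of that shape, assuming a local kubeconfig and the `kube-system` namespace (names and structure here are illustrative, not minikube's actual kapi.go):

    // Hypothetical sketch: poll pods matching a label selector until every
    // one reports the Ready condition, the same pattern behind the repeated
    // "waiting for pod ..." lines above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func allReady(pods []corev1.Pod) bool {
        if len(pods) == 0 {
            return false // nothing scheduled yet: keep waiting
        }
        for _, p := range pods {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false
            }
        }
        return true
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        selector := "kubernetes.io/minikube-addons=registry" // label from the log
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && allReady(pods.Items) {
                fmt.Println("all pods ready for", selector)
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence above
        }
        fmt.Println("timed out waiting for", selector)
    }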
	I0913 18:26:13.722156  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:13.910867  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:14.226410  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:14.360561  300876 pod_ready.go:103] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"False"
	I0913 18:26:14.409927  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:14.721601  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:14.910401  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:15.232813  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:15.359527  300876 pod_ready.go:93] pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace has status "Ready":"True"
	I0913 18:26:15.359613  300876 pod_ready.go:82] duration metric: took 47.007497993s for pod "coredns-7c65d6cfc9-qx7rg" in "kube-system" namespace to be "Ready" ...
	I0913 18:26:15.359641  300876 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-365496" in "kube-system" namespace to be "Ready" ...
	I0913 18:26:15.366588  300876 pod_ready.go:93] pod "etcd-addons-365496" in "kube-system" namespace has status "Ready":"True"
	I0913 18:26:15.366614  300876 pod_ready.go:82] duration metric: took 6.95187ms for pod "etcd-addons-365496" in "kube-system" namespace to be "Ready" ...
	I0913 18:26:15.366631  300876 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-365496" in "kube-system" namespace to be "Ready" ...
	I0913 18:26:15.372690  300876 pod_ready.go:93] pod "kube-apiserver-addons-365496" in "kube-system" namespace has status "Ready":"True"
	I0913 18:26:15.372722  300876 pod_ready.go:82] duration metric: took 6.083531ms for pod "kube-apiserver-addons-365496" in "kube-system" namespace to be "Ready" ...
	I0913 18:26:15.372734  300876 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-365496" in "kube-system" namespace to be "Ready" ...
	I0913 18:26:15.378990  300876 pod_ready.go:93] pod "kube-controller-manager-addons-365496" in "kube-system" namespace has status "Ready":"True"
	I0913 18:26:15.379015  300876 pod_ready.go:82] duration metric: took 6.273528ms for pod "kube-controller-manager-addons-365496" in "kube-system" namespace to be "Ready" ...
	I0913 18:26:15.379027  300876 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tkzx8" in "kube-system" namespace to be "Ready" ...
	I0913 18:26:15.385358  300876 pod_ready.go:93] pod "kube-proxy-tkzx8" in "kube-system" namespace has status "Ready":"True"
	I0913 18:26:15.385379  300876 pod_ready.go:82] duration metric: took 6.344683ms for pod "kube-proxy-tkzx8" in "kube-system" namespace to be "Ready" ...
	I0913 18:26:15.385389  300876 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-365496" in "kube-system" namespace to be "Ready" ...
	I0913 18:26:15.410925  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:15.721872  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:15.757014  300876 pod_ready.go:93] pod "kube-scheduler-addons-365496" in "kube-system" namespace has status "Ready":"True"
	I0913 18:26:15.757085  300876 pod_ready.go:82] duration metric: took 371.68744ms for pod "kube-scheduler-addons-365496" in "kube-system" namespace to be "Ready" ...
	I0913 18:26:15.757109  300876 pod_ready.go:39] duration metric: took 48.421911612s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:26:15.757158  300876 api_server.go:52] waiting for apiserver process to appear ...
	I0913 18:26:15.757257  300876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:26:15.774344  300876 api_server.go:72] duration metric: took 51.165288111s to wait for apiserver process to appear ...
	I0913 18:26:15.774369  300876 api_server.go:88] waiting for apiserver healthz status ...
	I0913 18:26:15.774394  300876 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0913 18:26:15.782534  300876 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0913 18:26:15.783759  300876 api_server.go:141] control plane version: v1.31.1
	I0913 18:26:15.783794  300876 api_server.go:131] duration metric: took 9.416893ms to wait for apiserver health ...
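The preceding steps first confirm an apiserver process exists (`sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH), then probe the apiserver's `/healthz` endpoint until it returns 200 with the literal body `ok`. A minimal sketch of that probe, assuming the endpoint from the log and skipping CA verification purely for brevity:

    // Hedged sketch of the /healthz probe logged above: an HTTPS GET against
    // the apiserver; success means HTTP 200 and the body "ok".
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption for the sketch only; a real check verifies the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // healthy: "200 ok"
    }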
	I0913 18:26:15.783803  300876 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 18:26:15.913391  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:16.022097  300876 system_pods.go:59] 18 kube-system pods found
	I0913 18:26:16.022144  300876 system_pods.go:61] "coredns-7c65d6cfc9-qx7rg" [5223fe9b-4303-4cba-a2b9-ace271e5c575] Running
	I0913 18:26:16.022158  300876 system_pods.go:61] "csi-hostpath-attacher-0" [10d6994d-9486-4fe0-90c5-7952672a9e3a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 18:26:16.022166  300876 system_pods.go:61] "csi-hostpath-resizer-0" [20688c19-42ff-4221-b998-9149f2b9ee45] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 18:26:16.022176  300876 system_pods.go:61] "csi-hostpathplugin-hlqgv" [6d2b25c5-d569-481f-9909-2c3915ed6585] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 18:26:16.022182  300876 system_pods.go:61] "etcd-addons-365496" [ccfd253c-cbac-48de-8a64-666ceb2b3994] Running
	I0913 18:26:16.022187  300876 system_pods.go:61] "kindnet-dp2m9" [f9107fae-ceb7-4c76-bd94-8fb1a105e4ce] Running
	I0913 18:26:16.022191  300876 system_pods.go:61] "kube-apiserver-addons-365496" [cf3eed5e-836d-480a-803e-03d31eeced05] Running
	I0913 18:26:16.022196  300876 system_pods.go:61] "kube-controller-manager-addons-365496" [91723736-aaae-4109-b1b3-8c0805d21a12] Running
	I0913 18:26:16.022206  300876 system_pods.go:61] "kube-ingress-dns-minikube" [d825beaa-336d-4293-af8a-9c3972bdfdea] Running
	I0913 18:26:16.022211  300876 system_pods.go:61] "kube-proxy-tkzx8" [e7c0f8ca-a1ed-451f-86de-aca16fec7dfa] Running
	I0913 18:26:16.022222  300876 system_pods.go:61] "kube-scheduler-addons-365496" [ce3e3740-f13a-4f83-a0d1-fbbd010c9895] Running
	I0913 18:26:16.022229  300876 system_pods.go:61] "metrics-server-84c5f94fbc-zw9g7" [8c940fcd-9cef-4740-9433-0b9ccb893566] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 18:26:16.022237  300876 system_pods.go:61] "nvidia-device-plugin-daemonset-8p94d" [0dce0d3d-b978-40ef-8ed8-f936aece4e07] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0913 18:26:16.022245  300876 system_pods.go:61] "registry-66c9cd494c-97mff" [74f2b527-02d3-446a-b0e7-cb8eab4b50e9] Running
	I0913 18:26:16.022250  300876 system_pods.go:61] "registry-proxy-wlc96" [75b88ef1-69ac-4b5a-bd2a-9dbaede32979] Running
	I0913 18:26:16.022256  300876 system_pods.go:61] "snapshot-controller-56fcc65765-r65bz" [5554025b-35c5-4f9e-bb25-493fbee15f74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:26:16.022263  300876 system_pods.go:61] "snapshot-controller-56fcc65765-zqcnd" [79f78186-f2a5-473f-b8f1-35de2439cc99] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:26:16.022267  300876 system_pods.go:61] "storage-provisioner" [1fb4d923-e342-40ff-a481-83b0082c30db] Running
	I0913 18:26:16.022275  300876 system_pods.go:74] duration metric: took 238.465775ms to wait for pod list to return data ...
	I0913 18:26:16.022285  300876 default_sa.go:34] waiting for default service account to be created ...
	I0913 18:26:16.155668  300876 default_sa.go:45] found service account: "default"
	I0913 18:26:16.155701  300876 default_sa.go:55] duration metric: took 133.408882ms for default service account to be created ...
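The `default_sa` step waits until the `default` ServiceAccount exists in the `default` namespace, since workloads referencing it cannot be admitted before it is created. A minimal sketch, assuming the same local kubeconfig; a successful Get is the whole signal:

    // Hypothetical sketch of the default-service-account check.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        sa, err := cs.CoreV1().ServiceAccounts("default").Get(
            context.TODO(), "default", metav1.GetOptions{})
        if err != nil {
            panic(err) // not created yet: a real waiter would retry instead
        }
        fmt.Println("found service account:", sa.Name)
    }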
	I0913 18:26:16.155713  300876 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 18:26:16.221079  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:16.362473  300876 system_pods.go:86] 18 kube-system pods found
	I0913 18:26:16.362513  300876 system_pods.go:89] "coredns-7c65d6cfc9-qx7rg" [5223fe9b-4303-4cba-a2b9-ace271e5c575] Running
	I0913 18:26:16.362525  300876 system_pods.go:89] "csi-hostpath-attacher-0" [10d6994d-9486-4fe0-90c5-7952672a9e3a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 18:26:16.362533  300876 system_pods.go:89] "csi-hostpath-resizer-0" [20688c19-42ff-4221-b998-9149f2b9ee45] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 18:26:16.362541  300876 system_pods.go:89] "csi-hostpathplugin-hlqgv" [6d2b25c5-d569-481f-9909-2c3915ed6585] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 18:26:16.362546  300876 system_pods.go:89] "etcd-addons-365496" [ccfd253c-cbac-48de-8a64-666ceb2b3994] Running
	I0913 18:26:16.362552  300876 system_pods.go:89] "kindnet-dp2m9" [f9107fae-ceb7-4c76-bd94-8fb1a105e4ce] Running
	I0913 18:26:16.362563  300876 system_pods.go:89] "kube-apiserver-addons-365496" [cf3eed5e-836d-480a-803e-03d31eeced05] Running
	I0913 18:26:16.362568  300876 system_pods.go:89] "kube-controller-manager-addons-365496" [91723736-aaae-4109-b1b3-8c0805d21a12] Running
	I0913 18:26:16.362577  300876 system_pods.go:89] "kube-ingress-dns-minikube" [d825beaa-336d-4293-af8a-9c3972bdfdea] Running
	I0913 18:26:16.362581  300876 system_pods.go:89] "kube-proxy-tkzx8" [e7c0f8ca-a1ed-451f-86de-aca16fec7dfa] Running
	I0913 18:26:16.362585  300876 system_pods.go:89] "kube-scheduler-addons-365496" [ce3e3740-f13a-4f83-a0d1-fbbd010c9895] Running
	I0913 18:26:16.362592  300876 system_pods.go:89] "metrics-server-84c5f94fbc-zw9g7" [8c940fcd-9cef-4740-9433-0b9ccb893566] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 18:26:16.362603  300876 system_pods.go:89] "nvidia-device-plugin-daemonset-8p94d" [0dce0d3d-b978-40ef-8ed8-f936aece4e07] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0913 18:26:16.362608  300876 system_pods.go:89] "registry-66c9cd494c-97mff" [74f2b527-02d3-446a-b0e7-cb8eab4b50e9] Running
	I0913 18:26:16.362615  300876 system_pods.go:89] "registry-proxy-wlc96" [75b88ef1-69ac-4b5a-bd2a-9dbaede32979] Running
	I0913 18:26:16.362622  300876 system_pods.go:89] "snapshot-controller-56fcc65765-r65bz" [5554025b-35c5-4f9e-bb25-493fbee15f74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:26:16.362629  300876 system_pods.go:89] "snapshot-controller-56fcc65765-zqcnd" [79f78186-f2a5-473f-b8f1-35de2439cc99] Running
	I0913 18:26:16.362634  300876 system_pods.go:89] "storage-provisioner" [1fb4d923-e342-40ff-a481-83b0082c30db] Running
	I0913 18:26:16.362641  300876 system_pods.go:126] duration metric: took 206.922488ms to wait for k8s-apps to be running ...
	I0913 18:26:16.362649  300876 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 18:26:16.362712  300876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:26:16.377362  300876 system_svc.go:56] duration metric: took 14.70387ms WaitForService to wait for kubelet
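The `system_svc` check shells out to systemctl (over SSH and under sudo in the run above); `is-active --quiet` prints nothing and communicates purely through its exit code, 0 meaning the unit is active. A minimal local sketch:

    // Hedged sketch of the kubelet service check: the command's error value
    // (nil iff exit code 0) is the entire health signal.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }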
	I0913 18:26:16.377393  300876 kubeadm.go:582] duration metric: took 51.76834727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:26:16.377440  300876 node_conditions.go:102] verifying NodePressure condition ...
	I0913 18:26:16.409410  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:16.556838  300876 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0913 18:26:16.556870  300876 node_conditions.go:123] node cpu capacity is 2
	I0913 18:26:16.556883  300876 node_conditions.go:105] duration metric: took 179.433007ms to run NodePressure ...
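The NodePressure verification reads each node's reported capacity (here 2 CPUs and 203034800Ki of ephemeral storage) before declaring the node healthy. A minimal sketch of reading the same fields with client-go, also checking the pressure conditions a healthy node reports as False (the exact predicate is an assumption of the sketch):

    // Hypothetical sketch of a NodePressure-style check: print capacity and
    // flag any MemoryPressure/DiskPressure/PIDPressure condition not False.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral=%s\n", n.Name,
                n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status != corev1.ConditionFalse {
                        fmt.Printf("  pressure: %s=%s\n", c.Type, c.Status)
                    }
                }
            }
        }
    }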
	I0913 18:26:16.556896  300876 start.go:241] waiting for startup goroutines ...
	I0913 18:26:16.722022  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:16.910996  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:17.221424  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:17.410049  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:17.721988  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:17.911789  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:18.222953  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:18.410390  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:18.722302  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:18.910733  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:19.224348  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:19.410300  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:19.721603  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:19.910133  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:20.223609  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:20.409652  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:20.722638  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:20.910596  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:21.222456  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:21.410161  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:21.721977  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:21.910011  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:22.239355  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:22.409769  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:22.720994  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:22.911208  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:23.224607  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:23.410053  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:23.721458  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:23.909643  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:24.221699  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:24.409750  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:24.723930  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:24.910069  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:25.224503  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:25.410402  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:25.721126  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:25.909143  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:26.221738  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:26.410186  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:26.720983  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:26.911407  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:27.229440  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:27.413040  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:27.724673  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:27.910959  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:28.225746  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:28.410308  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:28.721422  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:28.912150  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:29.222299  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:29.410644  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:29.724548  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:29.912756  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:30.222881  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:30.411956  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:30.722149  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:30.910200  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:31.265604  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:31.410209  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:31.733551  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:31.912012  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:32.222267  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:32.409564  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:32.722283  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:32.909938  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:33.222477  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:33.410208  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:33.721628  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:33.910255  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:34.222097  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:34.410959  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:34.721647  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:34.911497  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:35.221790  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:35.410423  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:35.721420  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:35.910784  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:36.222319  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:36.410769  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:36.721800  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:36.910498  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:37.222166  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:37.411240  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:37.722354  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:37.910647  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:38.222028  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:38.410494  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:38.721388  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:38.910015  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:39.225093  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:39.412574  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:39.721015  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:39.910285  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:40.222140  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:40.409253  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:40.722457  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:40.910367  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:41.223554  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:41.410452  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:41.721339  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:41.909193  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:42.223624  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:42.410914  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:42.721076  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:42.911597  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:43.222188  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:43.410487  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:43.722234  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:43.910617  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:44.224428  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:44.410771  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:44.737642  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:44.909323  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:45.224408  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:45.411050  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:45.725531  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:45.909997  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:46.222606  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:46.410223  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:46.721584  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:46.909752  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:47.256492  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:47.410896  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:47.721210  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:47.910541  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:48.222874  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:48.411054  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:48.722855  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:48.909877  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:49.222682  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:49.410452  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:49.722157  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:49.912101  300876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:26:50.221681  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:50.420522  300876 kapi.go:107] duration metric: took 1m17.015193189s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0913 18:26:50.721458  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:51.222609  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:51.722087  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:52.222954  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:52.721734  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:53.222166  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:53.722637  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:54.221288  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:54.721501  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:55.221383  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:55.722338  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:56.221741  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:26:56.722089  300876 kapi.go:107] duration metric: took 1m21.005452296s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
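Both remaining addon waits have now resolved (1m17s for ingress-nginx above, 1m21s here for the CSI hostpath driver). As a design note, the same poll-until-deadline shape can also be expressed with apimachinery's wait helper rather than a hand-rolled loop; a sketch assuming a recent client-go/apimachinery (v0.27+), with a Running-phase test standing in for whatever readiness predicate the caller wants:

    // Hedged sketch: "poll every 500ms until ready or timeout", written with
    // wait.PollUntilContextTimeout instead of a manual loop.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        sel := "kubernetes.io/minikube-addons=csi-hostpath-driver"
        err = wait.PollUntilContextTimeout(context.Background(),
            500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
                    metav1.ListOptions{LabelSelector: sel})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // keep polling through transient errors
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return true, nil
            })
        fmt.Println("wait result:", err)
    }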
	I0913 18:27:00.610838  300876 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 18:27:00.610868  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:01.110588  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:01.610576  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:02.110075  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:02.610071  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:03.110199  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:03.609984  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:04.110029  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:04.609925  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:05.110277  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:05.610379  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:06.110193  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:06.610763  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:07.110162  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:07.610272  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:08.112636  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:08.610779  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:09.111153  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:09.610899  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:10.111455  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:10.610459  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:11.110066  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:11.611442  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:12.111257  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:12.609539  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:13.110256  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:13.610038  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:14.110303  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:14.609753  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:15.110263  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:15.610415  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:16.109953  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:16.610835  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:17.110919  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:17.609873  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:18.110558  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:18.610335  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:19.110855  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:19.610406  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:20.110298  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:20.610214  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:21.109803  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:21.611264  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:22.114368  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:22.616555  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:23.110634  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:23.610596  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:24.110627  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:24.610016  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:25.110085  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:25.609964  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:26.110528  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:26.610570  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:27.110928  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:27.610983  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:28.110786  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:28.611607  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:29.110701  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:29.610674  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:30.110930  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:30.610430  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:31.110053  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:31.610643  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:32.111193  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:32.609685  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:33.113008  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:33.611147  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:34.110536  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:34.610227  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:35.110202  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:35.609822  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:36.110579  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:36.610375  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:37.110146  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:37.610793  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:38.110755  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:38.610146  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:39.111014  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:39.610612  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:40.109978  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:40.610633  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:41.111001  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:41.610615  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:42.113140  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:42.610700  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:43.110365  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:43.610585  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:44.110379  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:44.610013  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:45.111122  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:45.609920  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:46.110159  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:46.610160  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:47.110021  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:47.610477  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:48.111470  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:48.610290  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:49.110718  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:49.611342  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:50.110582  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:50.610263  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:51.111038  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:51.610241  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:52.110914  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:52.610815  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:53.109664  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:53.610643  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:54.109931  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:54.610309  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:55.110823  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:55.610314  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:56.110415  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:56.609921  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:57.110158  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:57.610971  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:58.110268  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:58.610610  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:59.110305  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:27:59.609919  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:00.120664  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:00.610657  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:01.110496  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:01.610693  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:02.111117  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:02.609816  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:03.111186  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:03.611024  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:04.110170  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:04.611619  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:05.110678  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:05.611333  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:06.110201  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:06.610905  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:07.110908  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:07.615245  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:08.110641  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:08.610844  300876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:28:09.111274  300876 kapi.go:107] duration metric: took 2m32.004732533s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 18:28:09.113140  300876 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-365496 cluster.
	I0913 18:28:09.114791  300876 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 18:28:09.116463  300876 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0913 18:28:09.118348  300876 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, volcano, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0913 18:28:09.120167  300876 addons.go:510] duration metric: took 2m44.510879964s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns nvidia-device-plugin storage-provisioner-rancher volcano metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0913 18:28:09.120225  300876 start.go:246] waiting for cluster config update ...
	I0913 18:28:09.120250  300876 start.go:255] writing updated cluster config ...
	I0913 18:28:09.120566  300876 ssh_runner.go:195] Run: rm -f paused
	I0913 18:28:09.472688  300876 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 18:28:09.474796  300876 out.go:177] * Done! kubectl is now configured to use "addons-365496" cluster and "default" namespace by default
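
The out.go messages near the end of the start log describe the gcp-auth opt-out path: pods carrying a `gcp-auth-skip-secret` label are skipped when credentials are injected (presumably by the gcp-auth-mutate webhook that appears later in the apiserver log). A minimal pod manifest using that label, as a sketch: the label key comes from the log, while the pod name, image, and the "true" value are illustrative assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"  # label key taken from the log; value assumed
    spec:
      containers:
      - name: app                     # hypothetical container
        image: nginx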
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	7f08f6ac7b60d       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   d57e222c5d9f2       gadget-pdh2j
	3156af38d2ccc       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   654c2a0e698a3       gcp-auth-89d5ffd79-l6kks
	4e4dc786b2c02       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   1542ede3c88f3       csi-hostpathplugin-hlqgv
	34c02eac4f720       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   1542ede3c88f3       csi-hostpathplugin-hlqgv
	5245f996c1bdb       922312104da8a       4 minutes ago       Running             liveness-probe                           0                   1542ede3c88f3       csi-hostpathplugin-hlqgv
	6781d11d2fcdf       08f6b2990811a       4 minutes ago       Running             hostpath                                 0                   1542ede3c88f3       csi-hostpathplugin-hlqgv
	85db0a6780082       8b46b1cd48760       4 minutes ago       Running             admission                                0                   7d1107d53345f       volcano-admission-77d7d48b68-6qlln
	0d03cdcd8b0f9       289a818c8d9c5       4 minutes ago       Running             controller                               0                   96d1f3ba51bc2       ingress-nginx-controller-bc57996ff-549v2
	066eb503b7703       0107d56dbc0be       4 minutes ago       Running             node-driver-registrar                    0                   1542ede3c88f3       csi-hostpathplugin-hlqgv
	9fbe2bf968f8f       a9bac31a5be8d       4 minutes ago       Running             nvidia-device-plugin-ctr                 0                   8cfe30d8031d5       nvidia-device-plugin-daemonset-8p94d
	669aa78201de5       8be4bcf8ec607       4 minutes ago       Running             cloud-spanner-emulator                   0                   302eb37d8b7e6       cloud-spanner-emulator-769b77f747-p8cr7
	e3516fefc1b53       420193b27261a       4 minutes ago       Exited              patch                                    2                   4f46f86d082af       ingress-nginx-admission-patch-mvhxj
	44a3eb096e1d8       1505f556b3a7b       4 minutes ago       Running             volcano-controllers                      0                   88b1caa0e98fb       volcano-controllers-56675bb4d5-hzv8d
	2765b3880ccd0       9a80d518f102c       4 minutes ago       Running             csi-attacher                             0                   004a972c383f4       csi-hostpath-attacher-0
	a51630560a089       420193b27261a       4 minutes ago       Exited              create                                   0                   d8668cd7db966       ingress-nginx-admission-create-z9plc
	6162cd15e5079       d9c7ad4c226bf       4 minutes ago       Running             volcano-scheduler                        0                   cbe7a432dea50       volcano-scheduler-576bc46687-mnc29
	85f0ed449d0da       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   33aeb35173fec       metrics-server-84c5f94fbc-zw9g7
	2db7de86dd7cf       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   8334b7f35e24e       snapshot-controller-56fcc65765-r65bz
	8ae11de7a3065       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   1542ede3c88f3       csi-hostpathplugin-hlqgv
	8106555c37f6b       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   f84fea86266b3       csi-hostpath-resizer-0
	ff0159e994472       77bdba588b953       5 minutes ago       Running             yakd                                     0                   8af668f2f2ddf       yakd-dashboard-67d98fc6b-dbp2p
	465012c553b56       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   c66d9fe281158       snapshot-controller-56fcc65765-zqcnd
	e6706d77e177e       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   8da692ccb5eb9       coredns-7c65d6cfc9-qx7rg
	d0e1bbccbe5ad       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   febee4530ff5a       local-path-provisioner-86d989889c-cktm5
	3859ef37da681       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   5819d342735a3       registry-66c9cd494c-97mff
	b59f63d900fb6       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   063dc7da20a6b       registry-proxy-wlc96
	44d420544e61f       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   89758dafa7477       kube-ingress-dns-minikube
	26c263a6b5164       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   c81a459fd86b7       storage-provisioner
	23b7d42007768       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   344191bc2858c       kube-proxy-tkzx8
	af39148dbb88d       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   1469e0e101301       kindnet-dp2m9
	146201e9102c8       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   e2eaf2a874fa3       kube-apiserver-addons-365496
	872204cca9225       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   13a22e7e8daaa       kube-scheduler-addons-365496
	43675c6cf08f3       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   53d73d89639fb       kube-controller-manager-addons-365496
	16ba1a0b25361       27e3830e14027       6 minutes ago       Running             etcd                                     0                   9415187d5ac6c       etcd-addons-365496
	
	
	==> containerd <==
	Sep 13 18:29:04 addons-365496 containerd[810]: time="2024-09-13T18:29:04.940029556Z" level=info msg="CreateContainer within sandbox \"d57e222c5d9f21b79f9c1e71492905318ee45510eec0601d56110779de15a53d\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 13 18:29:04 addons-365496 containerd[810]: time="2024-09-13T18:29:04.964358576Z" level=info msg="CreateContainer within sandbox \"d57e222c5d9f21b79f9c1e71492905318ee45510eec0601d56110779de15a53d\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f\""
	Sep 13 18:29:04 addons-365496 containerd[810]: time="2024-09-13T18:29:04.965187499Z" level=info msg="StartContainer for \"7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f\""
	Sep 13 18:29:05 addons-365496 containerd[810]: time="2024-09-13T18:29:05.027476072Z" level=info msg="StartContainer for \"7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f\" returns successfully"
	Sep 13 18:29:06 addons-365496 containerd[810]: time="2024-09-13T18:29:06.574928964Z" level=error msg="ExecSync for \"7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f\" failed" error="failed to exec in container: failed to create exec \"93b2c1c1f23292857206096122a1a6a1368e2c6e4abfbb92094324b4b623b10a\": cannot exec in a stopped state: unknown"
	Sep 13 18:29:06 addons-365496 containerd[810]: time="2024-09-13T18:29:06.580776435Z" level=error msg="ExecSync for \"7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f\" failed" error="failed to exec in container: failed to create exec \"5dcd0f4f06380b4a45067e4cd70087f65a14d94b6d358b49799faa310ea04dc6\": cannot exec in a stopped state: unknown"
	Sep 13 18:29:06 addons-365496 containerd[810]: time="2024-09-13T18:29:06.582093251Z" level=error msg="ExecSync for \"7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f\" failed" error="failed to exec in container: failed to create exec \"37a58e0718b3a27e123047e46381675aee0275124148b6d0dee693ea21d05792\": cannot exec in a stopped state: unknown"
	Sep 13 18:29:06 addons-365496 containerd[810]: time="2024-09-13T18:29:06.598154633Z" level=error msg="ExecSync for \"7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"be4306506ed899030b680da7d404d6dcea4272c3cb57ec20ba288ca62e8a2b69\": task 7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f not found: not found"
	Sep 13 18:29:06 addons-365496 containerd[810]: time="2024-09-13T18:29:06.598683675Z" level=info msg="shim disconnected" id=7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f namespace=k8s.io
	Sep 13 18:29:06 addons-365496 containerd[810]: time="2024-09-13T18:29:06.600056966Z" level=warning msg="cleaning up after shim disconnected" id=7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f namespace=k8s.io
	Sep 13 18:29:06 addons-365496 containerd[810]: time="2024-09-13T18:29:06.600088679Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 13 18:29:06 addons-365496 containerd[810]: time="2024-09-13T18:29:06.600044166Z" level=error msg="ExecSync for \"7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"a1bcadd62215118394ca6828d880c2f04a3077892d77c81651a0c988685eb1bc\": task 7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f not found: not found"
	Sep 13 18:29:06 addons-365496 containerd[810]: time="2024-09-13T18:29:06.601275632Z" level=error msg="ExecSync for \"7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f not found: not found"
	Sep 13 18:29:06 addons-365496 containerd[810]: time="2024-09-13T18:29:06.798245026Z" level=info msg="RemoveContainer for \"d754ae30d76d3324f3c993058ca83ce12e9475a2d5d958f3350bd9c49d62cc22\""
	Sep 13 18:29:06 addons-365496 containerd[810]: time="2024-09-13T18:29:06.811981666Z" level=info msg="RemoveContainer for \"d754ae30d76d3324f3c993058ca83ce12e9475a2d5d958f3350bd9c49d62cc22\" returns successfully"
	Sep 13 18:29:19 addons-365496 containerd[810]: time="2024-09-13T18:29:19.857852841Z" level=info msg="RemoveContainer for \"a01f4890e4e98fdd9ad38612b11dcd971692fd151cb77762e7c640108d101fdf\""
	Sep 13 18:29:19 addons-365496 containerd[810]: time="2024-09-13T18:29:19.865007668Z" level=info msg="RemoveContainer for \"a01f4890e4e98fdd9ad38612b11dcd971692fd151cb77762e7c640108d101fdf\" returns successfully"
	Sep 13 18:29:19 addons-365496 containerd[810]: time="2024-09-13T18:29:19.867435532Z" level=info msg="StopPodSandbox for \"531a70aa534547ce7196690527efa91e0b06d217622ee3040aff10c94c9a777c\""
	Sep 13 18:29:19 addons-365496 containerd[810]: time="2024-09-13T18:29:19.875387733Z" level=info msg="TearDown network for sandbox \"531a70aa534547ce7196690527efa91e0b06d217622ee3040aff10c94c9a777c\" successfully"
	Sep 13 18:29:19 addons-365496 containerd[810]: time="2024-09-13T18:29:19.875429710Z" level=info msg="StopPodSandbox for \"531a70aa534547ce7196690527efa91e0b06d217622ee3040aff10c94c9a777c\" returns successfully"
	Sep 13 18:29:19 addons-365496 containerd[810]: time="2024-09-13T18:29:19.876059486Z" level=info msg="RemovePodSandbox for \"531a70aa534547ce7196690527efa91e0b06d217622ee3040aff10c94c9a777c\""
	Sep 13 18:29:19 addons-365496 containerd[810]: time="2024-09-13T18:29:19.876185714Z" level=info msg="Forcibly stopping sandbox \"531a70aa534547ce7196690527efa91e0b06d217622ee3040aff10c94c9a777c\""
	Sep 13 18:29:19 addons-365496 containerd[810]: time="2024-09-13T18:29:19.883580574Z" level=info msg="TearDown network for sandbox \"531a70aa534547ce7196690527efa91e0b06d217622ee3040aff10c94c9a777c\" successfully"
	Sep 13 18:29:19 addons-365496 containerd[810]: time="2024-09-13T18:29:19.889541375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"531a70aa534547ce7196690527efa91e0b06d217622ee3040aff10c94c9a777c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 13 18:29:19 addons-365496 containerd[810]: time="2024-09-13T18:29:19.889662786Z" level=info msg="RemovePodSandbox \"531a70aa534547ce7196690527efa91e0b06d217622ee3040aff10c94c9a777c\" returns successfully"
	
	
	==> coredns [e6706d77e177e96d6bbd0e5368265b0cf73dd6d1b9628c1798a1b6ef8274576e] <==
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54975 - 14333 "HINFO IN 8629097169692245054.4506638775227950158. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011087587s
	[INFO] 10.244.0.2:33368 - 12884 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000211815s
	[INFO] 10.244.0.2:33368 - 34899 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000173777s
	[INFO] 10.244.0.2:32789 - 7736 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000153559s
	[INFO] 10.244.0.2:32789 - 62783 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000115085s
	[INFO] 10.244.0.2:44976 - 35545 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084488s
	[INFO] 10.244.0.2:44976 - 26075 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087204s
	[INFO] 10.244.0.2:53602 - 57840 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000133981s
	[INFO] 10.244.0.2:53602 - 55282 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075176s
	[INFO] 10.244.0.2:51977 - 35483 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00495147s
	[INFO] 10.244.0.2:51977 - 5017 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005228475s
	[INFO] 10.244.0.2:33918 - 26652 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000071172s
	[INFO] 10.244.0.2:33918 - 35353 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000051659s
	[INFO] 10.244.0.24:43702 - 24500 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000191647s
	[INFO] 10.244.0.24:52968 - 1898 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000229185s
	[INFO] 10.244.0.24:44423 - 60736 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000175499s
	[INFO] 10.244.0.24:48881 - 1876 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000163996s
	[INFO] 10.244.0.24:39661 - 44481 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000187323s
	[INFO] 10.244.0.24:53291 - 46378 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124833s
	[INFO] 10.244.0.24:45658 - 57696 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002587149s
	[INFO] 10.244.0.24:53499 - 46221 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002733299s
	[INFO] 10.244.0.24:57244 - 31912 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002120966s
	[INFO] 10.244.0.24:44133 - 6053 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001962526s
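
The NXDOMAIN-then-NOERROR pattern above is ordinary Kubernetes DNS search-path expansion: with the default ndots:5, a short name such as storage.googleapis.com is tried against each search suffix before the bare name is sent upstream. For the client at 10.244.0.24, the suffixes visible in the queries imply a pod resolv.conf along these lines (reconstructed from the log, not captured output; the nameserver address is an assumption):

    nameserver 10.96.0.10   # typical cluster DNS service IP; not shown in the log
    search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    options ndots:5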
	
	
	==> describe nodes <==
	Name:               addons-365496
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-365496
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=addons-365496
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T18_25_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-365496
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-365496"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-365496
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:31:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:28:23 +0000   Fri, 13 Sep 2024 18:25:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:28:23 +0000   Fri, 13 Sep 2024 18:25:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:28:23 +0000   Fri, 13 Sep 2024 18:25:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:28:23 +0000   Fri, 13 Sep 2024 18:25:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-365496
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 529e1a0df4834f48992727382bc24d29
	  System UUID:                82273203-7c22-4996-baf1-b7a9dfe1b21b
	  Boot ID:                    31d76137-2e5d-4866-b75b-16f7e69e7ff6
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-p8cr7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  gadget                      gadget-pdh2j                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  gcp-auth                    gcp-auth-89d5ffd79-l6kks                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-549v2    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-7c65d6cfc9-qx7rg                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 csi-hostpathplugin-hlqgv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 etcd-addons-365496                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m8s
	  kube-system                 kindnet-dp2m9                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m4s
	  kube-system                 kube-apiserver-addons-365496                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-addons-365496       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-proxy-tkzx8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-addons-365496                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 metrics-server-84c5f94fbc-zw9g7             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m58s
	  kube-system                 nvidia-device-plugin-daemonset-8p94d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 registry-66c9cd494c-97mff                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 registry-proxy-wlc96                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 snapshot-controller-56fcc65765-r65bz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-zqcnd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  local-path-storage          local-path-provisioner-86d989889c-cktm5     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  volcano-system              volcano-admission-77d7d48b68-6qlln          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  volcano-system              volcano-controllers-56675bb4d5-hzv8d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-scheduler-576bc46687-mnc29          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-dbp2p              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m1s                   kube-proxy       
	  Warning  CgroupV1                 6m16s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m16s (x8 over 6m16s)  kubelet          Node addons-365496 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m16s (x7 over 6m16s)  kubelet          Node addons-365496 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m16s (x7 over 6m16s)  kubelet          Node addons-365496 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m9s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m8s                   kubelet          Node addons-365496 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m8s                   kubelet          Node addons-365496 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m8s                   kubelet          Node addons-365496 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m5s                   node-controller  Node addons-365496 event: Registered Node addons-365496 in Controller
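
As a quick arithmetic check on the allocation table above: the node is allocatable for 2 CPUs (2000m), so 1050m of requests is 1050/2000 = 52.5%, displayed as 52%; likewise 8022296Ki is roughly 7834Mi, so 638Mi of memory requests is about 8.1% and 476Mi of limits about 6.1%, displayed as 8% and 6%.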
	
	
	==> dmesg <==
	[Sep13 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015201] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.448905] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.746796] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.102084] kauditd_printk_skb: 36 callbacks suppressed
	[Sep13 17:15] hrtimer: interrupt took 6691767 ns
	[Sep13 17:51] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [16ba1a0b25361626e2d273703598fe0718581748fc45df05ec35277b853219cd] <==
	{"level":"info","ts":"2024-09-13T18:25:13.123574Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-13T18:25:13.511893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-13T18:25:13.512139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-13T18:25:13.512328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-13T18:25:13.512422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-13T18:25:13.512516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-13T18:25:13.512602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-13T18:25:13.512690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-13T18:25:13.516041Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-365496 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T18:25:13.516236Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T18:25:13.517653Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:25:13.517935Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T18:25:13.519350Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:25:13.519568Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:25:13.519723Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:25:13.518873Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T18:25:13.531914Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T18:25:13.532134Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T18:25:13.533457Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T18:25:13.534481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-13T18:25:13.544402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T18:26:31.072619Z","caller":"traceutil/trace.go:171","msg":"trace[614794107] linearizableReadLoop","detail":"{readStateIndex:1269; appliedIndex:1267; }","duration":"103.127736ms","start":"2024-09-13T18:26:30.969473Z","end":"2024-09-13T18:26:31.072601Z","steps":["trace[614794107] 'read index received'  (duration: 43.26509ms)","trace[614794107] 'applied index is now lower than readState.Index'  (duration: 59.861998ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T18:26:31.072726Z","caller":"traceutil/trace.go:171","msg":"trace[778269580] transaction","detail":"{read_only:false; response_revision:1236; number_of_response:1; }","duration":"103.308789ms","start":"2024-09-13T18:26:30.969409Z","end":"2024-09-13T18:26:31.072718Z","steps":["trace[778269580] 'process raft request'  (duration: 53.301855ms)","trace[778269580] 'compare'  (duration: 49.721224ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T18:26:31.073276Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.770541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-09-13T18:26:31.073314Z","caller":"traceutil/trace.go:171","msg":"trace[786434522] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1239; }","duration":"103.835641ms","start":"2024-09-13T18:26:30.969470Z","end":"2024-09-13T18:26:31.073305Z","steps":["trace[786434522] 'agreement among raft nodes before linearized reading'  (duration: 103.436839ms)"],"step_count":1}
	
	
	==> gcp-auth [3156af38d2ccce22d7ad745d7d86ae8e81ccc5c06eae4f426578b946f64df0de] <==
	2024/09/13 18:28:08 GCP Auth Webhook started!
	2024/09/13 18:28:25 Ready to marshal response ...
	2024/09/13 18:28:25 Ready to write response ...
	2024/09/13 18:28:26 Ready to marshal response ...
	2024/09/13 18:28:26 Ready to write response ...
	
	
	==> kernel <==
	 18:31:28 up  2:13,  0 users,  load average: 0.20, 1.27, 2.18
	Linux addons-365496 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [af39148dbb88d11557580b160bae6801ea0f769594be14ca8d8b18f4c39d1c4a] <==
	I0913 18:29:26.516924       1 main.go:299] handling current node
	I0913 18:29:36.516177       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0913 18:29:36.516214       1 main.go:299] handling current node
	I0913 18:29:46.516063       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0913 18:29:46.516237       1 main.go:299] handling current node
	I0913 18:29:56.516715       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0913 18:29:56.516750       1 main.go:299] handling current node
	I0913 18:30:06.516511       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0913 18:30:06.516544       1 main.go:299] handling current node
	I0913 18:30:16.516379       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0913 18:30:16.516414       1 main.go:299] handling current node
	I0913 18:30:26.517078       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0913 18:30:26.517110       1 main.go:299] handling current node
	I0913 18:30:36.517003       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0913 18:30:36.517042       1 main.go:299] handling current node
	I0913 18:30:46.516813       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0913 18:30:46.516849       1 main.go:299] handling current node
	I0913 18:30:56.516148       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0913 18:30:56.516185       1 main.go:299] handling current node
	I0913 18:31:06.516815       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0913 18:31:06.516860       1 main.go:299] handling current node
	I0913 18:31:16.516419       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0913 18:31:16.516449       1 main.go:299] handling current node
	I0913 18:31:26.517081       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0913 18:31:26.517378       1 main.go:299] handling current node
	
	
	==> kube-apiserver [146201e9102c8efb75c45cb6f138adae71c174216e85c3a75139a223c6f7ddc3] <==
	W0913 18:26:40.105091       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:41.074494       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:42.083066       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:43.172998       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:44.209593       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:45.262941       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:46.353973       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:47.462767       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:48.559290       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:49.606986       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:50.614114       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:51.652042       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:52.733852       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:53.829760       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:54.865743       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:55.925511       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:26:56.939415       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.28.231:443: connect: connection refused
	W0913 18:27:00.043683       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.108.247:443: connect: connection refused
	E0913 18:27:00.043728       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.108.247:443: connect: connection refused" logger="UnhandledError"
	W0913 18:27:40.058001       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.108.247:443: connect: connection refused
	E0913 18:27:40.058049       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.108.247:443: connect: connection refused" logger="UnhandledError"
	W0913 18:27:40.111840       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.108.247:443: connect: connection refused
	E0913 18:27:40.111962       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.108.247:443: connect: connection refused" logger="UnhandledError"
	I0913 18:28:25.920067       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0913 18:28:25.959003       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
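The block above records two different webhook failure modes: the Volcano admission webhooks fail closed (requests are rejected while volcano-admission is still coming up and the service at 10.111.28.231:443 refuses connections), whereas gcp-auth-mutate.k8s.io fails open (the warning is logged and the request proceeds). Which mode applies is governed by each webhook's failurePolicy. A hedged client-go sketch for inspecting those policies (the kubeconfig path is an assumption, and this is not part of the test suite):

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumed kubeconfig location; adjust for the minikube profile in use.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	restCfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(restCfg)

	// Print every mutating webhook and whether it fails open or closed.
	list, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, cfg := range list.Items {
		for _, wh := range cfg.Webhooks {
			policy := "Fail" // v1 default when failurePolicy is unset
			if wh.FailurePolicy != nil {
				policy = string(*wh.FailurePolicy)
			}
			fmt.Printf("%s: failurePolicy=%s\n", wh.Name, policy)
		}
	}
}

The fail-closed Volcano webhooks would explain why early operations on Volcano resources were rejected until 18:28:25, when the quota evaluators for jobs.batch.volcano.sh and podgroups.scheduling.volcano.sh were finally registered.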
	
	
	==> kube-controller-manager [43675c6cf08f391dbc368df750bc1b945efb55faed0ca0c909710abb89da806a] <==
	I0913 18:27:40.078779       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0913 18:27:40.088862       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0913 18:27:40.102636       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0913 18:27:40.121858       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0913 18:27:40.132910       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0913 18:27:40.140072       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0913 18:27:40.156063       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0913 18:27:41.552781       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0913 18:27:41.568198       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0913 18:27:42.683176       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0913 18:27:42.723944       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0913 18:27:43.690705       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0913 18:27:43.699734       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0913 18:27:43.710313       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0913 18:27:43.731096       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0913 18:27:43.740986       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0913 18:27:43.748423       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0913 18:28:08.655128       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="9.803037ms"
	I0913 18:28:08.655395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="55.639µs"
	I0913 18:28:13.024996       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0913 18:28:13.030579       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0913 18:28:13.074996       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0913 18:28:13.076378       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0913 18:28:23.629776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-365496"
	I0913 18:28:25.674397       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
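The controller-manager section is dominated by the job controller re-enqueueing the gcp-auth cert jobs with delay="1s" until they settle at 18:28:13 (delay="0s"); the final line is volcano-admission-init being re-enqueued right after the test deleted it. The delayed re-enqueue pattern comes from client-go's workqueue; a minimal sketch (illustrative, not the controller's actual code):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// A delaying queue: AddAfter makes the key pop out of Get only after
	// the delay elapses, which is what "enqueueing job ... delay=1s" means.
	q := workqueue.NewDelayingQueue()
	defer q.ShutDown()

	q.AddAfter("gcp-auth/gcp-auth-certs-create", 1*time.Second)

	item, shutdown := q.Get() // blocks roughly 1s, then yields the key
	if shutdown {
		return
	}
	fmt.Println("dequeued:", item)
	q.Done(item)
}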
	
	
	==> kube-proxy [23b7d420077687bcf8b4d7c3920a157743bfcbf19119f20542f62749705b0e9b] <==
	I0913 18:25:26.449944       1 server_linux.go:66] "Using iptables proxy"
	I0913 18:25:26.561561       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0913 18:25:26.561623       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:25:26.599288       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0913 18:25:26.599351       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:25:26.601110       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:25:26.601515       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:25:26.601530       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:25:26.612098       1 config.go:199] "Starting service config controller"
	I0913 18:25:26.612129       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:25:26.612155       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:25:26.612160       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:25:26.618225       1 config.go:328] "Starting node config controller"
	I0913 18:25:26.618243       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 18:25:26.712515       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:25:26.712581       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:25:26.718906       1 shared_informer.go:320] Caches are synced for node config
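The kube-proxy startup above is clean and follows the standard shared-informer sequence: start the config controllers, wait for their caches to sync, then serve (the nodePortAddresses message is advisory, not an error in effect). A minimal client-go sketch of that wait-for-sync pattern (illustrative; the kubeconfig path is an assumption):

package main

import (
	"fmt"
	"path/filepath"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config") // assumed path
	restCfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(restCfg)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Mirrors the "Waiting for caches to sync" / "Caches are synced" pair above.
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		fmt.Println("cache sync failed")
		return
	}
	fmt.Println("caches are synced; ready to handle events")
}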
	
	
	==> kube-scheduler [872204cca9225c00dfbee0f82b21af29831b0f32ff1722cc9d960564ff2d2ef8] <==
	W0913 18:25:17.349625       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 18:25:17.355810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:25:17.349998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0913 18:25:17.355978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 18:25:17.356426       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 18:25:17.356671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:25:18.175241       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 18:25:18.175483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 18:25:18.203304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0913 18:25:18.203524       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 18:25:18.252098       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 18:25:18.252351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:25:18.280165       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0913 18:25:18.280420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:25:18.346861       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 18:25:18.347091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:25:18.441070       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 18:25:18.441334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:25:18.506902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 18:25:18.507193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:25:18.516176       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0913 18:25:18.516437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 18:25:18.721135       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 18:25:18.721413       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0913 18:25:21.933322       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
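All the forbidden errors above predate the cache-sync line at 18:25:21 and look like the usual startup ordering (the scheduler's informers begin listing before its RBAC grants are visible, then recover), rather than anything related to the Volcano failure. To probe a permission like this after the fact, a SelfSubjectAccessReview works; a hedged sketch, assuming the kubeconfig identity is the one being tested:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config") // assumed path
	restCfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(restCfg)

	// "Can the current identity list namespaces cluster-wide?" -- the same
	// access the scheduler was being denied above.
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: "namespaces",
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}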
	
	
	==> kubelet <==
	Sep 13 18:29:19 addons-365496 kubelet[1496]: I0913 18:29:19.856136    1496 scope.go:117] "RemoveContainer" containerID="a01f4890e4e98fdd9ad38612b11dcd971692fd151cb77762e7c640108d101fdf"
	Sep 13 18:29:24 addons-365496 kubelet[1496]: I0913 18:29:24.796767    1496 scope.go:117] "RemoveContainer" containerID="7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f"
	Sep 13 18:29:24 addons-365496 kubelet[1496]: E0913 18:29:24.797511    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-pdh2j_gadget(afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3)\"" pod="gadget/gadget-pdh2j" podUID="afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3"
	Sep 13 18:29:39 addons-365496 kubelet[1496]: I0913 18:29:39.797736    1496 scope.go:117] "RemoveContainer" containerID="7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f"
	Sep 13 18:29:39 addons-365496 kubelet[1496]: E0913 18:29:39.797969    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-pdh2j_gadget(afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3)\"" pod="gadget/gadget-pdh2j" podUID="afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3"
	Sep 13 18:29:50 addons-365496 kubelet[1496]: I0913 18:29:50.797135    1496 scope.go:117] "RemoveContainer" containerID="7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f"
	Sep 13 18:29:50 addons-365496 kubelet[1496]: E0913 18:29:50.797353    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-pdh2j_gadget(afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3)\"" pod="gadget/gadget-pdh2j" podUID="afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3"
	Sep 13 18:29:54 addons-365496 kubelet[1496]: I0913 18:29:54.796879    1496 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wlc96" secret="" err="secret \"gcp-auth\" not found"
	Sep 13 18:30:02 addons-365496 kubelet[1496]: I0913 18:30:02.796299    1496 scope.go:117] "RemoveContainer" containerID="7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f"
	Sep 13 18:30:02 addons-365496 kubelet[1496]: E0913 18:30:02.796512    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-pdh2j_gadget(afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3)\"" pod="gadget/gadget-pdh2j" podUID="afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3"
	Sep 13 18:30:17 addons-365496 kubelet[1496]: I0913 18:30:17.796506    1496 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-97mff" secret="" err="secret \"gcp-auth\" not found"
	Sep 13 18:30:17 addons-365496 kubelet[1496]: I0913 18:30:17.798400    1496 scope.go:117] "RemoveContainer" containerID="7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f"
	Sep 13 18:30:17 addons-365496 kubelet[1496]: E0913 18:30:17.798850    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-pdh2j_gadget(afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3)\"" pod="gadget/gadget-pdh2j" podUID="afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3"
	Sep 13 18:30:20 addons-365496 kubelet[1496]: I0913 18:30:20.796933    1496 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-8p94d" secret="" err="secret \"gcp-auth\" not found"
	Sep 13 18:30:31 addons-365496 kubelet[1496]: I0913 18:30:31.796740    1496 scope.go:117] "RemoveContainer" containerID="7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f"
	Sep 13 18:30:31 addons-365496 kubelet[1496]: E0913 18:30:31.796938    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-pdh2j_gadget(afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3)\"" pod="gadget/gadget-pdh2j" podUID="afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3"
	Sep 13 18:30:43 addons-365496 kubelet[1496]: I0913 18:30:43.798495    1496 scope.go:117] "RemoveContainer" containerID="7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f"
	Sep 13 18:30:43 addons-365496 kubelet[1496]: E0913 18:30:43.798716    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-pdh2j_gadget(afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3)\"" pod="gadget/gadget-pdh2j" podUID="afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3"
	Sep 13 18:30:55 addons-365496 kubelet[1496]: I0913 18:30:55.797152    1496 scope.go:117] "RemoveContainer" containerID="7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f"
	Sep 13 18:30:55 addons-365496 kubelet[1496]: E0913 18:30:55.797364    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-pdh2j_gadget(afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3)\"" pod="gadget/gadget-pdh2j" podUID="afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3"
	Sep 13 18:30:58 addons-365496 kubelet[1496]: I0913 18:30:58.796969    1496 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wlc96" secret="" err="secret \"gcp-auth\" not found"
	Sep 13 18:31:08 addons-365496 kubelet[1496]: I0913 18:31:08.796111    1496 scope.go:117] "RemoveContainer" containerID="7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f"
	Sep 13 18:31:08 addons-365496 kubelet[1496]: E0913 18:31:08.796336    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-pdh2j_gadget(afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3)\"" pod="gadget/gadget-pdh2j" podUID="afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3"
	Sep 13 18:31:22 addons-365496 kubelet[1496]: I0913 18:31:22.796750    1496 scope.go:117] "RemoveContainer" containerID="7f08f6ac7b60dd2e035617b6a69082ccb89d11075caae36b25ace42deca6a51f"
	Sep 13 18:31:22 addons-365496 kubelet[1496]: E0913 18:31:22.796977    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-pdh2j_gadget(afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3)\"" pod="gadget/gadget-pdh2j" podUID="afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3"
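The gadget pod is a separate, pre-existing CrashLoopBackOff: the kubelet keeps trying to restart it and has reached a 2m40s delay. Assuming the kubelet's well-known back-off behavior (10s initial delay, doubling per consecutive crash, capped at 5m), "back-off 2m40s" is the fifth step, as this tiny sketch shows:

package main

import (
	"fmt"
	"time"
)

func main() {
	// CrashLoopBackOff progression: 10s, 20s, 40s, 1m20s, 2m40s, then the
	// 5m cap. (A real kubelet clamps at the cap rather than stopping.)
	const maxDelay = 5 * time.Minute
	delay := 10 * time.Second
	for i := 1; delay <= maxDelay; i++ {
		fmt.Printf("crash %d -> back-off %v\n", i, delay)
		delay *= 2
	}
}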
	
	
	==> storage-provisioner [26c263a6b51649100c660741f593e29921f080d100a49687ebc79412e4af35ce] <==
	I0913 18:25:30.352515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 18:25:30.376646       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 18:25:30.376697       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 18:25:30.409395       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 18:25:30.409757       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-365496_4295536e-644f-4318-a4e4-7e90e0bc510e!
	I0913 18:25:30.410211       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"17e1890c-2e1f-483a-b864-533e18556dc2", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-365496_4295536e-644f-4318-a4e4-7e90e0bc510e became leader
	I0913 18:25:30.510484       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-365496_4295536e-644f-4318-a4e4-7e90e0bc510e!
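The storage-provisioner startup is a textbook client-go leader election: acquire the kube-system/k8s.io-minikube-hostpath lock, emit a LeaderElection event, then start the provisioner controller. The log shows an Endpoints-based lock; a sketch of the same pattern using the modern Lease lock (lock name and namespace taken from the log, everything else assumed):

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config") // assumed path
	restCfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(restCfg)

	id, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	// Blocks, renewing the lease; callbacks bracket the controller's lifetime.
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				fmt.Println("acquired lease; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				fmt.Println("lost lease; shutting down")
			},
		},
	})
}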
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-365496 -n addons-365496
helpers_test.go:261: (dbg) Run:  kubectl --context addons-365496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-z9plc ingress-nginx-admission-patch-mvhxj test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-365496 describe pod ingress-nginx-admission-create-z9plc ingress-nginx-admission-patch-mvhxj test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-365496 describe pod ingress-nginx-admission-create-z9plc ingress-nginx-admission-patch-mvhxj test-job-nginx-0: exit status 1 (93.995193ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-z9plc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mvhxj" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-365496 describe pod ingress-nginx-admission-create-z9plc ingress-nginx-admission-patch-mvhxj test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.85s)
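The post-mortem itself comes up empty: the pod list at helpers_test.go:272 finds three non-running pods, but the follow-up kubectl describe is issued without a -n flag, so it searches only the default namespace and reports NotFound for pods that live in ingress-nginx and my-volcano (a likely reading of the exit status 1, not something the harness states). The non-running-pod query is just a field selector; an equivalent client-go sketch, with the kubeconfig path assumed:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config") // assumed path
	restCfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(restCfg)

	// Same query as `kubectl get po -A --field-selector=status.phase!=Running`.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}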


Test pass (299/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 12.71
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 6.35
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.16
21 TestBinaryMirror 0.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 219.9
31 TestAddons/serial/GCPAuth/Namespaces 0.17
33 TestAddons/parallel/Registry 16.14
34 TestAddons/parallel/Ingress 18.88
35 TestAddons/parallel/InspektorGadget 11.09
36 TestAddons/parallel/MetricsServer 5.87
38 TestAddons/parallel/CSI 37.81
39 TestAddons/parallel/Headlamp 17.41
40 TestAddons/parallel/CloudSpanner 6.79
41 TestAddons/parallel/LocalPath 53.27
42 TestAddons/parallel/NvidiaDevicePlugin 6.66
43 TestAddons/parallel/Yakd 11.91
44 TestAddons/StoppedEnableDisable 12.35
45 TestCertOptions 35.76
46 TestCertExpiration 228.6
48 TestForceSystemdFlag 40.46
49 TestForceSystemdEnv 38.43
50 TestDockerEnvContainerd 47.04
55 TestErrorSpam/setup 31.23
56 TestErrorSpam/start 0.74
57 TestErrorSpam/status 1.02
58 TestErrorSpam/pause 1.8
59 TestErrorSpam/unpause 1.95
60 TestErrorSpam/stop 1.46
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 51.98
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 6.39
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.1
71 TestFunctional/serial/CacheCmd/cache/add_remote 10.57
72 TestFunctional/serial/CacheCmd/cache/add_local 1.29
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.99
77 TestFunctional/serial/CacheCmd/cache/delete 0.12
78 TestFunctional/serial/MinikubeKubectlCmd 0.16
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
80 TestFunctional/serial/ExtraConfig 49.63
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.71
83 TestFunctional/serial/LogsFileCmd 1.71
84 TestFunctional/serial/InvalidService 4.53
86 TestFunctional/parallel/ConfigCmd 0.45
87 TestFunctional/parallel/DashboardCmd 12.88
88 TestFunctional/parallel/DryRun 0.43
89 TestFunctional/parallel/InternationalLanguage 0.18
90 TestFunctional/parallel/StatusCmd 1.05
94 TestFunctional/parallel/ServiceCmdConnect 11.63
95 TestFunctional/parallel/AddonsCmd 0.24
96 TestFunctional/parallel/PersistentVolumeClaim 24.26
98 TestFunctional/parallel/SSHCmd 0.69
99 TestFunctional/parallel/CpCmd 2.05
101 TestFunctional/parallel/FileSync 0.35
102 TestFunctional/parallel/CertSync 2.15
106 TestFunctional/parallel/NodeLabels 0.1
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.77
110 TestFunctional/parallel/License 0.33
111 TestFunctional/parallel/Version/short 0.07
112 TestFunctional/parallel/Version/components 1.16
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
117 TestFunctional/parallel/ImageCommands/ImageBuild 3.69
118 TestFunctional/parallel/ImageCommands/Setup 0.77
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.47
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.39
124 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.8
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.39
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.74
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.47
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.17
135 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
140 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
142 TestFunctional/parallel/ProfileCmd/profile_list 0.4
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
144 TestFunctional/parallel/MountCmd/any-port 8.34
145 TestFunctional/parallel/ServiceCmd/List 0.61
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
148 TestFunctional/parallel/ServiceCmd/Format 0.52
149 TestFunctional/parallel/ServiceCmd/URL 0.42
150 TestFunctional/parallel/MountCmd/specific-port 1.89
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.74
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 123.18
159 TestMultiControlPlane/serial/DeployApp 37.98
160 TestMultiControlPlane/serial/PingHostFromPods 1.64
161 TestMultiControlPlane/serial/AddWorkerNode 24.46
162 TestMultiControlPlane/serial/NodeLabels 0.11
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.79
164 TestMultiControlPlane/serial/CopyFile 19.87
165 TestMultiControlPlane/serial/StopSecondaryNode 12.94
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.59
167 TestMultiControlPlane/serial/RestartSecondaryNode 23.49
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.81
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 150.49
170 TestMultiControlPlane/serial/DeleteSecondaryNode 10.71
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.59
172 TestMultiControlPlane/serial/StopCluster 36
173 TestMultiControlPlane/serial/RestartCluster 75.88
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.59
175 TestMultiControlPlane/serial/AddSecondaryNode 40.58
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
180 TestJSONOutput/start/Command 52.51
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.75
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.67
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.75
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.22
205 TestKicCustomNetwork/create_custom_network 40.02
206 TestKicCustomNetwork/use_default_bridge_network 35.42
207 TestKicExistingNetwork 32.11
208 TestKicCustomSubnet 33.75
209 TestKicStaticIP 36.65
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 69.99
214 TestMountStart/serial/StartWithMountFirst 7.06
215 TestMountStart/serial/VerifyMountFirst 0.26
216 TestMountStart/serial/StartWithMountSecond 6.91
217 TestMountStart/serial/VerifyMountSecond 0.26
218 TestMountStart/serial/DeleteFirst 1.62
219 TestMountStart/serial/VerifyMountPostDelete 0.28
220 TestMountStart/serial/Stop 1.2
221 TestMountStart/serial/RestartStopped 7.66
222 TestMountStart/serial/VerifyMountPostStop 0.27
225 TestMultiNode/serial/FreshStart2Nodes 65.75
226 TestMultiNode/serial/DeployApp2Nodes 15.57
227 TestMultiNode/serial/PingHostFrom2Pods 1.03
228 TestMultiNode/serial/AddNode 16.25
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.33
231 TestMultiNode/serial/CopyFile 10.26
232 TestMultiNode/serial/StopNode 2.31
233 TestMultiNode/serial/StartAfterStop 9.69
234 TestMultiNode/serial/RestartKeepsNodes 94.64
235 TestMultiNode/serial/DeleteNode 5.56
236 TestMultiNode/serial/StopMultiNode 24.17
237 TestMultiNode/serial/RestartMultiNode 48.15
238 TestMultiNode/serial/ValidateNameConflict 31.27
243 TestPreload 122.04
245 TestScheduledStopUnix 105.96
248 TestInsufficientStorage 11.25
249 TestRunningBinaryUpgrade 82.49
251 TestKubernetesUpgrade 349.05
252 TestMissingContainerUpgrade 164.98
254 TestPause/serial/Start 92.27
255 TestPause/serial/SecondStartNoReconfiguration 6.34
256 TestPause/serial/Pause 0.76
257 TestPause/serial/VerifyStatus 0.33
258 TestPause/serial/Unpause 0.7
259 TestPause/serial/PauseAgain 0.83
260 TestPause/serial/DeletePaused 2.57
261 TestPause/serial/VerifyDeletedResources 0.15
262 TestStoppedBinaryUpgrade/Setup 0.84
263 TestStoppedBinaryUpgrade/Upgrade 107.09
264 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
274 TestNoKubernetes/serial/StartWithK8s 34.53
275 TestNoKubernetes/serial/StartWithStopK8s 19.42
276 TestNoKubernetes/serial/Start 6.75
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
285 TestNetworkPlugins/group/false 4.54
286 TestNoKubernetes/serial/ProfileList 0.76
287 TestNoKubernetes/serial/Stop 1.26
288 TestNoKubernetes/serial/StartNoArgs 8.21
292 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
294 TestStartStop/group/old-k8s-version/serial/FirstStart 163.61
296 TestStartStop/group/no-preload/serial/FirstStart 70.49
297 TestStartStop/group/old-k8s-version/serial/DeployApp 9.72
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.79
299 TestStartStop/group/old-k8s-version/serial/Stop 12.41
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
301 TestStartStop/group/old-k8s-version/serial/SecondStart 378.03
302 TestStartStop/group/no-preload/serial/DeployApp 10.42
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.43
304 TestStartStop/group/no-preload/serial/Stop 12.96
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
306 TestStartStop/group/no-preload/serial/SecondStart 277.9
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
310 TestStartStop/group/no-preload/serial/Pause 3.2
312 TestStartStop/group/embed-certs/serial/FirstStart 93.59
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.22
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
316 TestStartStop/group/old-k8s-version/serial/Pause 3.78
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.91
319 TestStartStop/group/embed-certs/serial/DeployApp 9.34
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.45
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
322 TestStartStop/group/embed-certs/serial/Stop 12.24
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.45
324 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
326 TestStartStop/group/embed-certs/serial/SecondStart 292.84
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 273.21
329 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
331 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
332 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.2
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
335 TestStartStop/group/newest-cni/serial/FirstStart 42.06
336 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
337 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
338 TestStartStop/group/embed-certs/serial/Pause 4.23
339 TestNetworkPlugins/group/auto/Start 84.72
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.35
342 TestStartStop/group/newest-cni/serial/Stop 1.3
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
344 TestStartStop/group/newest-cni/serial/SecondStart 20.77
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
348 TestStartStop/group/newest-cni/serial/Pause 3.03
349 TestNetworkPlugins/group/kindnet/Start 90.66
350 TestNetworkPlugins/group/auto/KubeletFlags 0.43
351 TestNetworkPlugins/group/auto/NetCatPod 11.53
352 TestNetworkPlugins/group/auto/DNS 0.21
353 TestNetworkPlugins/group/auto/Localhost 0.15
354 TestNetworkPlugins/group/auto/HairPin 0.16
355 TestNetworkPlugins/group/calico/Start 68.78
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
358 TestNetworkPlugins/group/kindnet/NetCatPod 10.31
359 TestNetworkPlugins/group/kindnet/DNS 0.2
360 TestNetworkPlugins/group/kindnet/Localhost 0.25
361 TestNetworkPlugins/group/kindnet/HairPin 0.22
362 TestNetworkPlugins/group/custom-flannel/Start 57.95
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.37
365 TestNetworkPlugins/group/calico/NetCatPod 10.34
366 TestNetworkPlugins/group/calico/DNS 0.31
367 TestNetworkPlugins/group/calico/Localhost 0.27
368 TestNetworkPlugins/group/calico/HairPin 0.18
369 TestNetworkPlugins/group/enable-default-cni/Start 50.95
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
372 TestNetworkPlugins/group/custom-flannel/DNS 0.25
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.28
375 TestNetworkPlugins/group/flannel/Start 55.89
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.52
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.31
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
381 TestNetworkPlugins/group/bridge/Start 69.28
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
384 TestNetworkPlugins/group/flannel/NetCatPod 11.35
385 TestNetworkPlugins/group/flannel/DNS 0.23
386 TestNetworkPlugins/group/flannel/Localhost 0.18
387 TestNetworkPlugins/group/flannel/HairPin 0.19
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
389 TestNetworkPlugins/group/bridge/NetCatPod 8.26
390 TestNetworkPlugins/group/bridge/DNS 0.16
391 TestNetworkPlugins/group/bridge/Localhost 0.15
392 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (12.71s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-776826 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-776826 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.713955484s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.71s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-776826
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-776826: exit status 85 (79.609863ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-776826 | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC |          |
	|         | -p download-only-776826        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:24:08
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:24:08.233027  300121 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:24:08.233189  300121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:24:08.233201  300121 out.go:358] Setting ErrFile to fd 2...
	I0913 18:24:08.233206  300121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:24:08.233443  300121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
	W0913 18:24:08.233576  300121 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19636-294721/.minikube/config/config.json: open /home/jenkins/minikube-integration/19636-294721/.minikube/config/config.json: no such file or directory
	I0913 18:24:08.233989  300121 out.go:352] Setting JSON to true
	I0913 18:24:08.234848  300121 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7596,"bootTime":1726244253,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0913 18:24:08.234922  300121 start.go:139] virtualization:  
	I0913 18:24:08.238238  300121 out.go:97] [download-only-776826] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0913 18:24:08.238373  300121 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19636-294721/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 18:24:08.238414  300121 notify.go:220] Checking for updates...
	I0913 18:24:08.240635  300121 out.go:169] MINIKUBE_LOCATION=19636
	I0913 18:24:08.242731  300121 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:24:08.245005  300121 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19636-294721/kubeconfig
	I0913 18:24:08.247283  300121 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-294721/.minikube
	I0913 18:24:08.249152  300121 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0913 18:24:08.253629  300121 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 18:24:08.253857  300121 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:24:08.281460  300121 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 18:24:08.281576  300121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:24:08.349224  300121 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-13 18:24:08.339763461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:24:08.349346  300121 docker.go:318] overlay module found
	I0913 18:24:08.351438  300121 out.go:97] Using the docker driver based on user configuration
	I0913 18:24:08.351469  300121 start.go:297] selected driver: docker
	I0913 18:24:08.351475  300121 start.go:901] validating driver "docker" against <nil>
	I0913 18:24:08.351591  300121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:24:08.403525  300121 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-13 18:24:08.39384114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:24:08.403736  300121 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:24:08.404098  300121 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0913 18:24:08.404263  300121 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 18:24:08.406584  300121 out.go:169] Using Docker driver with root privileges
	I0913 18:24:08.408718  300121 cni.go:84] Creating CNI manager for ""
	I0913 18:24:08.408776  300121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0913 18:24:08.408788  300121 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0913 18:24:08.408871  300121 start.go:340] cluster config:
	{Name:download-only-776826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-776826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:24:08.410801  300121 out.go:97] Starting "download-only-776826" primary control-plane node in "download-only-776826" cluster
	I0913 18:24:08.410818  300121 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0913 18:24:08.412638  300121 out.go:97] Pulling base image v0.0.45-1726193793-19634 ...
	I0913 18:24:08.412679  300121 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0913 18:24:08.412787  300121 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local docker daemon
	I0913 18:24:08.428205  300121 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e to local cache
	I0913 18:24:08.428855  300121 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local cache directory
	I0913 18:24:08.428961  300121 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e to local cache
	I0913 18:24:08.474789  300121 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0913 18:24:08.474819  300121 cache.go:56] Caching tarball of preloaded images
	I0913 18:24:08.474984  300121 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0913 18:24:08.477544  300121 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0913 18:24:08.477568  300121 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0913 18:24:08.564866  300121 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19636-294721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0913 18:24:14.396331  300121 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0913 18:24:14.396526  300121 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19636-294721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0913 18:24:15.535321  300121 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0913 18:24:15.535757  300121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/download-only-776826/config.json ...
	I0913 18:24:15.535794  300121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/download-only-776826/config.json: {Name:mkb124a14c1ba756f6cf21da70c034eaaf72c922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:24:15.536392  300121 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0913 18:24:15.536976  300121 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19636-294721/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-776826 host does not exist
	  To start a cluster, run: "minikube start -p download-only-776826"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-776826
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (6.35s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-021767 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-021767 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.347717029s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.35s)
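For a by-hand equivalent of this download-only run, something like the following should work; the download-demo profile name is an arbitrary stand-in, and the flags mirror the command exercised above:

    $ minikube start -o=json --download-only -p download-demo \
        --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker
    $ minikube delete -p download-demo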

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-021767
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-021767: exit status 85 (69.74633ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-776826 | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC |                     |
	|         | -p download-only-776826        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC | 13 Sep 24 18:24 UTC |
	| delete  | -p download-only-776826        | download-only-776826 | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC | 13 Sep 24 18:24 UTC |
	| start   | -o=json --download-only        | download-only-021767 | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC |                     |
	|         | -p download-only-021767        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:24:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:24:21.372168  300324 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:24:21.372379  300324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:24:21.372410  300324 out.go:358] Setting ErrFile to fd 2...
	I0913 18:24:21.372432  300324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:24:21.372703  300324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
	I0913 18:24:21.373154  300324 out.go:352] Setting JSON to true
	I0913 18:24:21.374092  300324 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7609,"bootTime":1726244253,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0913 18:24:21.374199  300324 start.go:139] virtualization:  
	I0913 18:24:21.377135  300324 out.go:97] [download-only-021767] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0913 18:24:21.377358  300324 notify.go:220] Checking for updates...
	I0913 18:24:21.379675  300324 out.go:169] MINIKUBE_LOCATION=19636
	I0913 18:24:21.381570  300324 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:24:21.383338  300324 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19636-294721/kubeconfig
	I0913 18:24:21.385616  300324 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-294721/.minikube
	I0913 18:24:21.387690  300324 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0913 18:24:21.391534  300324 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 18:24:21.391814  300324 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:24:21.419949  300324 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 18:24:21.420067  300324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:24:21.476826  300324 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-13 18:24:21.466913157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:24:21.476942  300324 docker.go:318] overlay module found
	I0913 18:24:21.478798  300324 out.go:97] Using the docker driver based on user configuration
	I0913 18:24:21.478839  300324 start.go:297] selected driver: docker
	I0913 18:24:21.478847  300324 start.go:901] validating driver "docker" against <nil>
	I0913 18:24:21.478946  300324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:24:21.530052  300324 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-13 18:24:21.520719447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:24:21.530210  300324 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:24:21.530494  300324 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0913 18:24:21.530646  300324 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 18:24:21.533319  300324 out.go:169] Using Docker driver with root privileges
	I0913 18:24:21.535188  300324 cni.go:84] Creating CNI manager for ""
	I0913 18:24:21.535253  300324 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0913 18:24:21.535269  300324 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0913 18:24:21.535353  300324 start.go:340] cluster config:
	{Name:download-only-021767 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-021767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:24:21.537450  300324 out.go:97] Starting "download-only-021767" primary control-plane node in "download-only-021767" cluster
	I0913 18:24:21.537470  300324 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0913 18:24:21.539562  300324 out.go:97] Pulling base image v0.0.45-1726193793-19634 ...
	I0913 18:24:21.539596  300324 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0913 18:24:21.539627  300324 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local docker daemon
	I0913 18:24:21.554262  300324 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e to local cache
	I0913 18:24:21.554398  300324 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local cache directory
	I0913 18:24:21.554431  300324 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local cache directory, skipping pull
	I0913 18:24:21.554437  300324 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e exists in cache, skipping pull
	I0913 18:24:21.554447  300324 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e as a tarball
	I0913 18:24:21.598294  300324 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0913 18:24:21.598321  300324 cache.go:56] Caching tarball of preloaded images
	I0913 18:24:21.598488  300324 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0913 18:24:21.600721  300324 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0913 18:24:21.600750  300324 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0913 18:24:21.692591  300324 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19636-294721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0913 18:24:26.042727  300324 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0913 18:24:26.042867  300324 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19636-294721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-021767 host does not exist
	  To start a cluster, run: "minikube start -p download-only-021767"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-021767
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-832553 --alsologtostderr --binary-mirror http://127.0.0.1:39563 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-832553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-832553
--- PASS: TestBinaryMirror (0.58s)
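The binary-mirror path can be tried manually as well; this is a sketch assuming an HTTP server on 127.0.0.1:39563 that mirrors the dl.k8s.io directory layout (the test spins up its own ephemeral mirror on that port):

    $ minikube start --download-only -p mirror-demo \
        --binary-mirror http://127.0.0.1:39563 \
        --driver=docker --container-runtime=containerd
    $ minikube delete -p mirror-demo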

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-365496
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-365496: exit status 85 (86.398549ms)

-- stdout --
	* Profile "addons-365496" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-365496"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-365496
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-365496: exit status 85 (72.392407ms)

-- stdout --
	* Profile "addons-365496" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-365496"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (219.9s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-365496 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-365496 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m39.89610811s)
--- PASS: TestAddons/Setup (219.90s)
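A manual equivalent of this setup, trimmed to a few of the addons listed above (any profile name works), looks roughly like:

    $ minikube start -p addons-demo --wait=true --memory=4000 \
        --driver=docker --container-runtime=containerd \
        --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns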

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-365496 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-365496 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/parallel/Registry (16.14s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.698756ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-97mff" [74f2b527-02d3-446a-b0e7-cb8eab4b50e9] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.007101545s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wlc96" [75b88ef1-69ac-4b5a-bd2a-9dbaede32979] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011293365s
addons_test.go:338: (dbg) Run:  kubectl --context addons-365496 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-365496 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-365496 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.013274325s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-365496 ip
2024/09/13 18:32:04 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-365496 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.14s)
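The registry check above amounts to two probes that can be replayed by hand; the curl stands in for the test's own HTTP GET against the node's port 5000:

    $ kubectl run --rm -it registry-test --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -- \
        sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    $ curl http://$(minikube -p addons-365496 ip):5000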

                                                
                                    
TestAddons/parallel/Ingress (18.88s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-365496 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-365496 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-365496 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e0e9555a-b233-4471-9e49-0cc69781d86d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e0e9555a-b233-4471-9e49-0cc69781d86d] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003956032s
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-365496 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-365496 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-365496 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-365496 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-365496 addons disable ingress-dns --alsologtostderr -v=1: (1.224600785s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-365496 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-365496 addons disable ingress --alsologtostderr -v=1: (7.843989252s)
--- PASS: TestAddons/parallel/Ingress (18.88s)
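Both ingress paths verified above can be replayed from a shell; the hostnames come from the test's manifests:

    $ minikube -p addons-365496 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    $ nslookup hello-john.test $(minikube -p addons-365496 ip)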

                                                
                                    
TestAddons/parallel/InspektorGadget (11.09s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pdh2j" [afd3e4e8-b87f-4a18-9e27-d7e8ed94acb3] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.024800015s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-365496
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-365496: (6.064082558s)
--- PASS: TestAddons/parallel/InspektorGadget (11.09s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.579602ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-zw9g7" [8c940fcd-9cef-4740-9433-0b9ccb893566] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008585031s
addons_test.go:413: (dbg) Run:  kubectl --context addons-365496 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-365496 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.87s)
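Once metrics-server reports healthy, the same smoke check is a single command; it returns an error until the first metrics scrape completes, typically within a minute:

    $ kubectl top pods -n kube-system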

                                                
                                    
TestAddons/parallel/CSI (37.81s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:505: csi-hostpath-driver pods stabilized in 5.517948ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-365496 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-365496 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5562f2d9-d204-4cd3-99c7-7e32856563ed] Pending
helpers_test.go:344: "task-pv-pod" [5562f2d9-d204-4cd3-99c7-7e32856563ed] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5562f2d9-d204-4cd3-99c7-7e32856563ed] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.005799716s
addons_test.go:528: (dbg) Run:  kubectl --context addons-365496 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-365496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-365496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-365496 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-365496 delete pod task-pv-pod: (1.006146013s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-365496 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-365496 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-365496 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4bcfcb3c-5b1d-472e-b144-5a061b15bfc6] Pending
helpers_test.go:344: "task-pv-pod-restore" [4bcfcb3c-5b1d-472e-b144-5a061b15bfc6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4bcfcb3c-5b1d-472e-b144-5a061b15bfc6] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003789092s
addons_test.go:570: (dbg) Run:  kubectl --context addons-365496 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-365496 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-365496 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-365496 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-365496 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.767700138s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-365496 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (37.81s)
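The snapshot step in the middle of this flow corresponds to a manifest along these lines; this is a hypothetical reconstruction of testdata/csi-hostpath-driver/snapshot.yaml, and the snapshot class name is an assumption about the addon's default (the PVC name matches the log):

    # snapshot.yaml (reconstruction; class name assumed)
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass
      source:
        persistentVolumeClaimName: hpvc

applied with kubectl create -f snapshot.yaml, as in the log above.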

                                                
                                    
TestAddons/parallel/Headlamp (17.41s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-365496 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-365496 --alsologtostderr -v=1: (1.600453611s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-mngst" [ac1c2c07-a06a-4d16-9609-96fdb3166ff7] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-mngst" [ac1c2c07-a06a-4d16-9609-96fdb3166ff7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-mngst" [ac1c2c07-a06a-4d16-9609-96fdb3166ff7] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004545074s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-365496 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-365496 addons disable headlamp --alsologtostderr -v=1: (5.808569169s)
--- PASS: TestAddons/parallel/Headlamp (17.41s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.79s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-p8cr7" [cbd911e9-bbc1-4981-aa81-8e8193899b78] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00314452s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-365496
--- PASS: TestAddons/parallel/CloudSpanner (6.79s)

                                                
                                    
TestAddons/parallel/LocalPath (53.27s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-365496 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-365496 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-365496 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9bac18d9-8b4f-4acb-9706-da682beedd74] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9bac18d9-8b4f-4acb-9706-da682beedd74] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9bac18d9-8b4f-4acb-9706-da682beedd74] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.008641182s
addons_test.go:938: (dbg) Run:  kubectl --context addons-365496 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-365496 ssh "cat /opt/local-path-provisioner/pvc-ade02059-8596-486a-9672-c9ca807bd8e1_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-365496 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-365496 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-365496 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-365496 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.669432894s)
--- PASS: TestAddons/parallel/LocalPath (53.27s)
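The PVC side of this test is a plain manifest; this sketch is a hypothetical reconstruction of testdata/storage-provisioner-rancher/pvc.yaml, assuming the addon keeps the upstream provisioner's local-path storage class name (the requested size is arbitrary):

    # pvc.yaml (reconstruction; class name and size assumed)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path
      resources:
        requests:
          storage: 64Mi

The PVC stays Pending until a consuming pod is scheduled (the provisioner binds on first consumer), which is why the test applies pod.yaml before waiting on it.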

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.66s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8p94d" [0dce0d3d-b978-40ef-8ed8-f936aece4e07] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004689226s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-365496
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.66s)

                                                
                                    
TestAddons/parallel/Yakd (11.91s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-dbp2p" [8345b788-59c0-4669-9aae-788b4dc4032c] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00362413s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-365496 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-365496 addons disable yakd --alsologtostderr -v=1: (5.903313556s)
--- PASS: TestAddons/parallel/Yakd (11.91s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.35s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-365496
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-365496: (12.07070118s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-365496
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-365496
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-365496
--- PASS: TestAddons/StoppedEnableDisable (12.35s)
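The point of this test is that addon toggles work against a stopped cluster; by hand that is just:

    $ minikube stop -p addons-365496
    $ minikube addons enable dashboard -p addons-365496
    $ minikube addons disable dashboard -p addons-365496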

                                                
                                    
TestCertOptions (35.76s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-314119 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-314119 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.075266461s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-314119 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-314119 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-314119 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-314119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-314119
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-314119: (1.990124964s)
--- PASS: TestCertOptions (35.76s)
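To see the extra SANs land in the apiserver certificate, the two steps above can be replayed directly; the grep is a convenience added here, not part of the test:

    $ minikube start -p cert-demo --apiserver-ips=192.168.15.15 \
        --apiserver-names=www.google.com --apiserver-port=8555 \
        --driver=docker --container-runtime=containerd
    $ minikube -p cert-demo ssh \
        "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"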

                                                
                                    
TestCertExpiration (228.6s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-721077 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-721077 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.21032297s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-721077 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-721077 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.112361536s)
helpers_test.go:175: Cleaning up "cert-expiration-721077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-721077
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-721077: (2.273983145s)
--- PASS: TestCertExpiration (228.60s)
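The expiration scenario is two starts on the same profile: one with a deliberately short certificate lifetime, then one that renews it (8760h is one year):

    $ minikube start -p cert-exp-demo --cert-expiration=3m \
        --driver=docker --container-runtime=containerd
    # wait out the 3m lifetime, then restart to regenerate the certs:
    $ minikube start -p cert-exp-demo --cert-expiration=8760h \
        --driver=docker --container-runtime=containerd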

                                                
                                    
TestForceSystemdFlag (40.46s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-399425 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0913 19:11:12.566387  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-399425 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.06722557s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-399425 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-399425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-399425
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-399425: (2.024732433s)
--- PASS: TestForceSystemdFlag (40.46s)
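The cat of /etc/containerd/config.toml above is checking the cgroup driver; manually one would expect SystemdCgroup = true in the runc options when --force-systemd is set (the grep and the expected key are assumptions about containerd's config layout, not asserted by this log):

    $ minikube start -p systemd-demo --force-systemd \
        --driver=docker --container-runtime=containerd
    $ minikube -p systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup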

                                                
                                    
TestForceSystemdEnv (38.43s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-922695 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-922695 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.064730549s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-922695 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-922695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-922695
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-922695: (2.073830515s)
--- PASS: TestForceSystemdEnv (38.43s)
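
The env variant starts without --force-systemd, so the behavior is presumably driven by the MINIKUBE_FORCE_SYSTEMD variable (the name appears in minikube's environment listing elsewhere in this report; setting it to true here is an assumption):

  # assumption: MINIKUBE_FORCE_SYSTEMD=true is what the test exports before starting
  MINIKUBE_FORCE_SYSTEMD=true minikube start -p demo --memory=2048 --driver=docker --container-runtime=containerd
  minikube -p demo ssh "cat /etc/containerd/config.toml"
  minikube delete -p demo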

                                                
                                    
TestDockerEnvContainerd (47.04s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-057707 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-057707 --driver=docker  --container-runtime=containerd: (31.343143415s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-057707"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-057707": (1.055171751s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-NQ2FvB8AXg4C/agent.320157" SSH_AGENT_PID="320158" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-NQ2FvB8AXg4C/agent.320157" SSH_AGENT_PID="320158" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-NQ2FvB8AXg4C/agent.320157" SSH_AGENT_PID="320158" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.223879271s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-NQ2FvB8AXg4C/agent.320157" SSH_AGENT_PID="320158" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-057707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-057707
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-057707: (1.952045549s)
--- PASS: TestDockerEnvContainerd (47.04s)
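
Condensed, the docker-env-over-SSH workflow above looks like this; eval-ing the emitted script is the usual way to load it, and the agent socket and port in the log are specific to this run:

  minikube start -p dockerenv-demo --driver=docker --container-runtime=containerd
  # exports DOCKER_HOST=ssh://docker@<host>:<port> and adds minikube's SSH key to an agent
  eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-demo)"
  docker version
  DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
  docker image ls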

                                                
                                    
TestErrorSpam/setup (31.23s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-901802 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-901802 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-901802 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-901802 --driver=docker  --container-runtime=containerd: (31.227484211s)
--- PASS: TestErrorSpam/setup (31.23s)

TestErrorSpam/start (0.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

TestErrorSpam/status (1.02s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 status
--- PASS: TestErrorSpam/status (1.02s)

TestErrorSpam/pause (1.8s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 pause
--- PASS: TestErrorSpam/pause (1.80s)

TestErrorSpam/unpause (1.95s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 unpause
--- PASS: TestErrorSpam/unpause (1.95s)

TestErrorSpam/stop (1.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 stop: (1.266398525s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-901802 --log_dir /tmp/nospam-901802 stop
--- PASS: TestErrorSpam/stop (1.46s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19636-294721/.minikube/files/etc/test/nested/copy/300115/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.98s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-910777 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-910777 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (51.981643695s)
--- PASS: TestFunctional/serial/StartWithProxy (51.98s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.39s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-910777 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-910777 --alsologtostderr -v=8: (6.390369151s)
functional_test.go:663: soft start took 6.39089632s for "functional-910777" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.39s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-910777 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (10.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-910777 cache add registry.k8s.io/pause:3.1: (1.481271536s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-910777 cache add registry.k8s.io/pause:3.3: (7.749207967s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-910777 cache add registry.k8s.io/pause:latest: (1.338158602s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.57s)
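
The same cache operations as standalone commands, for reference (the image tags are the ones the test pulls):

  minikube -p functional-910777 cache add registry.k8s.io/pause:3.1
  minikube -p functional-910777 cache add registry.k8s.io/pause:3.3
  # cached images are saved on the host and loaded into the node's image store
  minikube cache list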

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-910777 /tmp/TestFunctionalserialCacheCmdcacheadd_local3810653958/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 cache add minikube-local-cache-test:functional-910777
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 cache delete minikube-local-cache-test:functional-910777
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-910777
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-910777 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (300.32493ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-910777 cache reload: (1.06001287s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)
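
Step by step, the reload check above: delete the image inside the node, confirm it is gone, then restore it from the host-side cache:

  minikube -p functional-910777 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-910777 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image absent
  minikube -p functional-910777 cache reload
  minikube -p functional-910777 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again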

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 kubectl -- --context functional-910777 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-910777 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (49.63s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-910777 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-910777 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.632393931s)
functional_test.go:761: restart took 49.632532751s for "functional-910777" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (49.63s)
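
The --extra-config syntax used here is component.key=value; the restart above is equivalent to:

  minikube start -p functional-910777 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all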

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-910777 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.71s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-910777 logs: (1.710097154s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

TestFunctional/serial/LogsFileCmd (1.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 logs --file /tmp/TestFunctionalserialLogsFileCmd3725970333/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-910777 logs --file /tmp/TestFunctionalserialLogsFileCmd3725970333/001/logs.txt: (1.710583936s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.71s)
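
Both log commands side by side; --file writes the same report to a path instead of stdout:

  minikube -p functional-910777 logs
  minikube -p functional-910777 logs --file /tmp/logs.txt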

                                                
                                    
TestFunctional/serial/InvalidService (4.53s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-910777 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-910777
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-910777: exit status 115 (687.436532ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30192 |
	|-----------|-------------|-------------|---------------------------|
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-910777 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.53s)
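
The contents of testdata/invalidsvc.yaml are not shown in this log, but the SVC_UNREACHABLE exit implies a NodePort service whose selector matches no running pod; a hypothetical minimal reproduction:

  # hypothetical manifest: any Service selecting a label no pod carries fails the same way
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: invalid-svc
  spec:
    type: NodePort
    selector:
      app: does-not-exist
    ports:
    - port: 80
  EOF
  minikube service invalid-svc   # exits 115: no running pod for service invalid-svc found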

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-910777 config get cpus: exit status 14 (64.570044ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-910777 config get cpus: exit status 14 (79.928336ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
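
The config round-trip above, in order; as the log shows, config get exits with status 14 when the key is unset:

  minikube -p functional-910777 config unset cpus
  minikube -p functional-910777 config get cpus   # exit 14: key not found
  minikube -p functional-910777 config set cpus 2
  minikube -p functional-910777 config get cpus   # prints 2
  minikube -p functional-910777 config unset cpus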

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.88s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-910777 --alsologtostderr -v=1]
E0913 18:38:09.499809  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:38:09.506741  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:38:09.518117  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:38:09.539481  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:38:09.580841  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:38:09.662565  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:38:09.824326  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-910777 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 336038: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.88s)
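
For reference, the invocation the test daemonizes; --url prints the proxied dashboard address instead of opening a browser:

  minikube dashboard --url --port 36195 -p functional-910777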

                                                
                                    
TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-910777 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-910777 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (179.614028ms)

-- stdout --
	* [functional-910777] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-294721/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-294721/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
-- /stdout --
** stderr ** 
	I0913 18:38:07.562442  335740 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:38:07.562625  335740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:38:07.562656  335740 out.go:358] Setting ErrFile to fd 2...
	I0913 18:38:07.562679  335740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:38:07.562964  335740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
	I0913 18:38:07.563455  335740 out.go:352] Setting JSON to false
	I0913 18:38:07.564475  335740 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8435,"bootTime":1726244253,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0913 18:38:07.564603  335740 start.go:139] virtualization:  
	I0913 18:38:07.567245  335740 out.go:177] * [functional-910777] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0913 18:38:07.569879  335740 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:38:07.569995  335740 notify.go:220] Checking for updates...
	I0913 18:38:07.573728  335740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:38:07.576066  335740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-294721/kubeconfig
	I0913 18:38:07.577906  335740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-294721/.minikube
	I0913 18:38:07.583130  335740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0913 18:38:07.585364  335740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:38:07.588545  335740 config.go:182] Loaded profile config "functional-910777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0913 18:38:07.589534  335740 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:38:07.613809  335740 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 18:38:07.613947  335740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:38:07.674790  335740 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-13 18:38:07.664819708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:38:07.674898  335740 docker.go:318] overlay module found
	I0913 18:38:07.678158  335740 out.go:177] * Using the docker driver based on existing profile
	I0913 18:38:07.680316  335740 start.go:297] selected driver: docker
	I0913 18:38:07.680337  335740 start.go:901] validating driver "docker" against &{Name:functional-910777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-910777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:38:07.680460  335740 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:38:07.683399  335740 out.go:201] 
	W0913 18:38:07.685348  335740 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0913 18:38:07.687270  335740 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-910777 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.43s)
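
The validation above can be reproduced directly; --dry-run exercises the start checks without mutating the profile, and minikube rejects memory requests below its 1800MB floor with exit code 23:

  minikube start -p functional-910777 --dry-run --memory 250MB   # exit 23, RSRC_INSUFFICIENT_REQ_MEMORY
  minikube start -p functional-910777 --dry-run                  # passes validation, no side effects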

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-910777 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-910777 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (183.643867ms)

-- stdout --
	* [functional-910777] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-294721/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-294721/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
-- /stdout --
** stderr ** 
	I0913 18:38:07.389082  335695 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:38:07.389253  335695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:38:07.389274  335695 out.go:358] Setting ErrFile to fd 2...
	I0913 18:38:07.389292  335695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:38:07.390092  335695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
	I0913 18:38:07.390544  335695 out.go:352] Setting JSON to false
	I0913 18:38:07.391575  335695 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8435,"bootTime":1726244253,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0913 18:38:07.391687  335695 start.go:139] virtualization:  
	I0913 18:38:07.394647  335695 out.go:177] * [functional-910777] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0913 18:38:07.397297  335695 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:38:07.397445  335695 notify.go:220] Checking for updates...
	I0913 18:38:07.400904  335695 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:38:07.402702  335695 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-294721/kubeconfig
	I0913 18:38:07.404781  335695 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-294721/.minikube
	I0913 18:38:07.407158  335695 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0913 18:38:07.408940  335695 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:38:07.411771  335695 config.go:182] Loaded profile config "functional-910777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0913 18:38:07.412387  335695 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:38:07.436043  335695 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 18:38:07.436153  335695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:38:07.495455  335695 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-13 18:38:07.483627874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:38:07.495577  335695 docker.go:318] overlay module found
	I0913 18:38:07.497838  335695 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0913 18:38:07.499722  335695 start.go:297] selected driver: docker
	I0913 18:38:07.499743  335695 start.go:901] validating driver "docker" against &{Name:functional-910777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-910777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:38:07.499918  335695 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:38:07.502442  335695 out.go:201] 
	W0913 18:38:07.504182  335695 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0913 18:38:07.506103  335695 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
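
The French output above is presumably selected through the standard locale variables; the log does not echo what the test sets, so the variable and value here are assumptions:

  # assumption: a French locale in the environment switches minikube's message catalog
  LC_ALL=fr_FR.UTF-8 minikube start -p functional-910777 --dry-run --memory 250MB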

                                                
                                    
TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)
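
The status format string above pulls fields from minikube's status output (the template keys are quoted from the test verbatim, including its "kublet" label):

  minikube -p functional-910777 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  minikube -p functional-910777 status -o json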

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-910777 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-910777 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-t8xmn" [84adb2fb-996d-409f-856b-4120196ba61a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-t8xmn" [84adb2fb-996d-409f-856b-4120196ba61a] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003852171s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32365
functional_test.go:1675: http://192.168.49.2:32365: success! body:

Hostname: hello-node-connect-65d86f57f4-t8xmn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32365
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.63s)
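
The expose-and-connect flow as standalone commands; the image and port are the ones the test uses:

  kubectl create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
  kubectl expose deployment hello-node-connect --type=NodePort --port=8080
  # prints http://<node-ip>:<nodeport> once the endpoint exists
  URL=$(minikube -p functional-910777 service hello-node-connect --url)
  curl "$URL"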

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (24.26s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6fbc742f-d8b5-4f3d-bc08-59ebde277bc6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003261863s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-910777 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-910777 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-910777 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-910777 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b51ab619-d634-4153-a531-238c56f03243] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b51ab619-d634-4153-a531-238c56f03243] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003549224s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-910777 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-910777 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-910777 delete -f testdata/storage-provisioner/pod.yaml: (1.107634408s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-910777 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a5377f54-470e-4dfb-8054-f1048b48f04f] Pending
helpers_test.go:344: "sp-pod" [a5377f54-470e-4dfb-8054-f1048b48f04f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a5377f54-470e-4dfb-8054-f1048b48f04f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004095537s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-910777 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.26s)
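
Condensed, the persistence check: data written through the claim must survive pod deletion, because the bound volume outlives any one pod:

  kubectl apply -f testdata/storage-provisioner/pvc.yaml
  kubectl apply -f testdata/storage-provisioner/pod.yaml
  kubectl exec sp-pod -- touch /tmp/mount/foo
  kubectl delete -f testdata/storage-provisioner/pod.yaml
  kubectl apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
  kubectl exec sp-pod -- ls /tmp/mount                     # foo is still present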

                                                
                                    
TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (2.05s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh -n functional-910777 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 cp functional-910777:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1811077694/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh -n functional-910777 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh -n functional-910777 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.05s)
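
minikube cp takes a host path then a node path, or <profile>:<path> to copy back out; the three cases above are:

  minikube -p functional-910777 cp testdata/cp-test.txt /home/docker/cp-test.txt               # host to node
  minikube -p functional-910777 cp functional-910777:/home/docker/cp-test.txt ./cp-test.txt    # node to host
  minikube -p functional-910777 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt        # parent dirs are created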

                                                
                                    
TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/300115/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "sudo cat /etc/test/nested/copy/300115/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/300115.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "sudo cat /etc/ssl/certs/300115.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/300115.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "sudo cat /usr/share/ca-certificates/300115.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3001152.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "sudo cat /etc/ssl/certs/3001152.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3001152.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "sudo cat /usr/share/ca-certificates/3001152.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.15s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-910777 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-910777 ssh "sudo systemctl is-active docker": exit status 1 (445.700776ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-910777 ssh "sudo systemctl is-active crio": exit status 1 (327.171053ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-910777 version -o=json --components: (1.156364747s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-910777 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-910777
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-910777
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-910777 image ls --format short --alsologtostderr:
I0913 18:38:17.570552  337444 out.go:345] Setting OutFile to fd 1 ...
I0913 18:38:17.570769  337444 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:38:17.570792  337444 out.go:358] Setting ErrFile to fd 2...
I0913 18:38:17.570810  337444 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:38:17.571063  337444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
I0913 18:38:17.571743  337444 config.go:182] Loaded profile config "functional-910777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0913 18:38:17.571924  337444 config.go:182] Loaded profile config "functional-910777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0913 18:38:17.572455  337444 cli_runner.go:164] Run: docker container inspect functional-910777 --format={{.State.Status}}
I0913 18:38:17.606472  337444 ssh_runner.go:195] Run: systemctl --version
I0913 18:38:17.606522  337444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-910777
I0913 18:38:17.624382  337444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/functional-910777/id_rsa Username:docker}
I0913 18:38:17.720546  337444 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-910777 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-910777  | sha256:ce2d2c | 2.17MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/library/minikube-local-cache-test | functional-910777  | sha256:e3a539 | 990B   |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-910777 image ls --format table --alsologtostderr:
I0913 18:38:21.164446  337740 out.go:345] Setting OutFile to fd 1 ...
I0913 18:38:21.164647  337740 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:38:21.164675  337740 out.go:358] Setting ErrFile to fd 2...
I0913 18:38:21.164695  337740 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:38:21.164974  337740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
I0913 18:38:21.165666  337740 config.go:182] Loaded profile config "functional-910777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0913 18:38:21.165848  337740 config.go:182] Loaded profile config "functional-910777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0913 18:38:21.166354  337740 cli_runner.go:164] Run: docker container inspect functional-910777 --format={{.State.Status}}
I0913 18:38:21.193778  337740 ssh_runner.go:195] Run: systemctl --version
I0913 18:38:21.193860  337740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-910777
I0913 18:38:21.227634  337740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/functional-910777/id_rsa Username:docker}
I0913 18:38:21.324927  337740 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-910777 image ls --format json --alsologtostderr:
[{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["g
cr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-910777"],"size":"217
3567"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:e3a5396a903b13dcb47b40c4b44fb071fecfcfcd3bf6ec7f4a2c28e45b650176
","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-910777"],"size":"990"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDi
gests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-910777 image ls --format json --alsologtostderr:
I0913 18:38:20.888608  337707 out.go:345] Setting OutFile to fd 1 ...
I0913 18:38:20.888787  337707 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:38:20.888816  337707 out.go:358] Setting ErrFile to fd 2...
I0913 18:38:20.888842  337707 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:38:20.890024  337707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
I0913 18:38:20.891918  337707 config.go:182] Loaded profile config "functional-910777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0913 18:38:20.892117  337707 config.go:182] Loaded profile config "functional-910777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0913 18:38:20.892958  337707 cli_runner.go:164] Run: docker container inspect functional-910777 --format={{.State.Status}}
I0913 18:38:20.912729  337707 ssh_runner.go:195] Run: systemctl --version
I0913 18:38:20.912786  337707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-910777
I0913 18:38:20.935763  337707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/functional-910777/id_rsa Username:docker}
I0913 18:38:21.039048  337707 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-910777 image ls --format yaml --alsologtostderr:
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:e3a5396a903b13dcb47b40c4b44fb071fecfcfcd3bf6ec7f4a2c28e45b650176
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-910777
size: "990"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-910777
size: "2173567"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-910777 image ls --format yaml --alsologtostderr:
I0913 18:38:17.847268  337476 out.go:345] Setting OutFile to fd 1 ...
I0913 18:38:17.847487  337476 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:38:17.847510  337476 out.go:358] Setting ErrFile to fd 2...
I0913 18:38:17.847533  337476 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:38:17.847797  337476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
I0913 18:38:17.848466  337476 config.go:182] Loaded profile config "functional-910777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0913 18:38:17.848613  337476 config.go:182] Loaded profile config "functional-910777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0913 18:38:17.849102  337476 cli_runner.go:164] Run: docker container inspect functional-910777 --format={{.State.Status}}
I0913 18:38:17.868243  337476 ssh_runner.go:195] Run: systemctl --version
I0913 18:38:17.868308  337476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-910777
I0913 18:38:17.885321  337476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/functional-910777/id_rsa Username:docker}
I0913 18:38:17.981051  337476 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-910777 ssh pgrep buildkitd: exit status 1 (332.21046ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image build -t localhost/my-image:functional-910777 testdata/build --alsologtostderr
E0913 18:38:19.753074  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
2024/09/13 18:38:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-910777 image build -t localhost/my-image:functional-910777 testdata/build --alsologtostderr: (3.10312943s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-910777 image build -t localhost/my-image:functional-910777 testdata/build --alsologtostderr:
I0913 18:38:18.439474  337569 out.go:345] Setting OutFile to fd 1 ...
I0913 18:38:18.440055  337569 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:38:18.440068  337569 out.go:358] Setting ErrFile to fd 2...
I0913 18:38:18.440074  337569 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:38:18.440474  337569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
I0913 18:38:18.441417  337569 config.go:182] Loaded profile config "functional-910777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0913 18:38:18.442080  337569 config.go:182] Loaded profile config "functional-910777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0913 18:38:18.442610  337569 cli_runner.go:164] Run: docker container inspect functional-910777 --format={{.State.Status}}
I0913 18:38:18.459660  337569 ssh_runner.go:195] Run: systemctl --version
I0913 18:38:18.459732  337569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-910777
I0913 18:38:18.476554  337569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/functional-910777/id_rsa Username:docker}
I0913 18:38:18.576541  337569 build_images.go:161] Building image from path: /tmp/build.2491444255.tar
I0913 18:38:18.576612  337569 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0913 18:38:18.586065  337569 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2491444255.tar
I0913 18:38:18.590059  337569 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2491444255.tar: stat -c "%s %y" /var/lib/minikube/build/build.2491444255.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2491444255.tar': No such file or directory
I0913 18:38:18.590090  337569 ssh_runner.go:362] scp /tmp/build.2491444255.tar --> /var/lib/minikube/build/build.2491444255.tar (3072 bytes)
I0913 18:38:18.615584  337569 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2491444255
I0913 18:38:18.624943  337569 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2491444255 -xf /var/lib/minikube/build/build.2491444255.tar
I0913 18:38:18.634439  337569 containerd.go:394] Building image: /var/lib/minikube/build/build.2491444255
I0913 18:38:18.634554  337569 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2491444255 --local dockerfile=/var/lib/minikube/build/build.2491444255 --output type=image,name=localhost/my-image:functional-910777
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:72f9530cc9ae24ad99e66f851440355e7bec492ea42c5f2b46a8a9dd34cd4da0
#8 exporting manifest sha256:72f9530cc9ae24ad99e66f851440355e7bec492ea42c5f2b46a8a9dd34cd4da0 0.0s done
#8 exporting config sha256:8b295bb673a12cc0dec09008627d0aaf544a503fffe74a063db0f0d18cfd4efa 0.0s done
#8 naming to localhost/my-image:functional-910777 done
#8 DONE 0.1s
I0913 18:38:21.462095  337569 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2491444255 --local dockerfile=/var/lib/minikube/build/build.2491444255 --output type=image,name=localhost/my-image:functional-910777: (2.827508449s)
I0913 18:38:21.462202  337569 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2491444255
I0913 18:38:21.471935  337569 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2491444255.tar
I0913 18:38:21.481959  337569 build_images.go:217] Built localhost/my-image:functional-910777 from /tmp/build.2491444255.tar
I0913 18:38:21.481991  337569 build_images.go:133] succeeded building to: functional-910777
I0913 18:38:21.481996  337569 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.69s)

TestFunctional/parallel/ImageCommands/Setup (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-910777
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.77s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image load --daemon kicbase/echo-server:functional-910777 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-910777 image load --daemon kicbase/echo-server:functional-910777 --alsologtostderr: (1.207778781s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image load --daemon kicbase/echo-server:functional-910777 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-910777 image load --daemon kicbase/echo-server:functional-910777 --alsologtostderr: (1.074191918s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.39s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-910777
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image load --daemon kicbase/echo-server:functional-910777 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-910777 image load --daemon kicbase/echo-server:functional-910777 --alsologtostderr: (1.214826298s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.80s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-910777 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-910777 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-910777 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-910777 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 333090: os: process already finished
helpers_test.go:502: unable to terminate pid 332976: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-910777 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.39s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-910777 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0c3ad131-c43d-4eda-9f0c-f6dc518ddf4e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0c3ad131-c43d-4eda-9f0c-f6dc518ddf4e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.005316173s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.39s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image save kicbase/echo-server:functional-910777 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image rm kicbase/echo-server:functional-910777 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-910777
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 image save --daemon kicbase/echo-server:functional-910777 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-910777
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-910777 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.17s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.28.68 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-910777 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-910777 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-910777 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-m9mhx" [442de7d1-8f95-4100-a59f-520f5e52448a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-m9mhx" [442de7d1-8f95-4100-a59f-520f5e52448a] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003941442s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "341.868318ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "54.836003ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "322.085845ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "56.261072ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/MountCmd/any-port (8.34s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-910777 /tmp/TestFunctionalparallelMountCmdany-port1550536943/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726252683694336556" to /tmp/TestFunctionalparallelMountCmdany-port1550536943/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726252683694336556" to /tmp/TestFunctionalparallelMountCmdany-port1550536943/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726252683694336556" to /tmp/TestFunctionalparallelMountCmdany-port1550536943/001/test-1726252683694336556
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-910777 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (465.579918ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 13 18:38 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 13 18:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 13 18:38 test-1726252683694336556
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh cat /mount-9p/test-1726252683694336556
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-910777 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0ad535fc-826c-4124-964a-f61c64a995cc] Pending
helpers_test.go:344: "busybox-mount" [0ad535fc-826c-4124-964a-f61c64a995cc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0ad535fc-826c-4124-964a-f61c64a995cc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0913 18:38:10.146022  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:38:10.788286  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [0ad535fc-826c-4124-964a-f61c64a995cc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004398167s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-910777 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-910777 /tmp/TestFunctionalparallelMountCmdany-port1550536943/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.34s)
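
To reproduce the 9p mount check by hand, the commands below mirror the test flow above (a minimal sketch: the profile name and binary path come from this run, while /tmp/mount-src stands in for the test's temporary directory). Note that the test tolerates one findmnt failure and retries, since the mount takes a moment to come up.

    # Start the 9p mount in the background, then verify it from inside the guest.
    out/minikube-linux-arm64 mount -p functional-910777 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-910777 ssh "findmnt -T /mount-9p | grep 9p"
    # Files written on the host under /tmp/mount-src appear in the guest under /mount-9p.
    out/minikube-linux-arm64 -p functional-910777 ssh -- ls -la /mount-9p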

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 service list -o json
functional_test.go:1494: Took "518.762515ms" to run "out/minikube-linux-arm64 -p functional-910777 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32459
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32459
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)
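
The service URL lookups above can be reproduced directly; service --url prints the NodePort endpoint, and --https requests the https form (a sketch using the profile and service names from this run):

    # Plain and https endpoint resolution for the hello-node service.
    out/minikube-linux-arm64 -p functional-910777 service hello-node --url
    out/minikube-linux-arm64 -p functional-910777 service --namespace=default --https --url hello-node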

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-910777 /tmp/TestFunctionalparallelMountCmdspecific-port2683529879/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "findmnt -T /mount-9p | grep 9p"
E0913 18:38:12.070420  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-910777 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (379.089978ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-910777 /tmp/TestFunctionalparallelMountCmdspecific-port2683529879/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-910777 ssh "sudo umount -f /mount-9p": exit status 1 (298.548503ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-910777 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-910777 /tmp/TestFunctionalparallelMountCmdspecific-port2683529879/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)
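
The --port flag pins the host side of the 9p server to a fixed port rather than a random one; the forced umount at the end is expected to fail once the mount process is gone (a sketch reusing this run's port and profile, with /tmp/mount-src as a stand-in source directory):

    # Mount on a fixed 9p port, verify, then observe the tolerated umount failure.
    out/minikube-linux-arm64 mount -p functional-910777 /tmp/mount-src:/mount-9p --port 46464 &
    out/minikube-linux-arm64 -p functional-910777 ssh "findmnt -T /mount-9p | grep 9p"
    # After the mount process exits, umount reports "not mounted" (status 32), as in the log above.
    out/minikube-linux-arm64 -p functional-910777 ssh "sudo umount -f /mount-9p"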

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-910777 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1290359750/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-910777 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1290359750/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-910777 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1290359750/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "findmnt -T" /mount1
E0913 18:38:14.631690  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-910777 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-910777 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-910777 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1290359750/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-910777 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1290359750/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-910777 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1290359750/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)
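
Cleanup verification hinges on mount --kill, which tears down every mount daemon for the profile in one call (a sketch with two of the three mounts from this run; /tmp/mount-src is a stand-in source directory):

    # Start multiple mounts of one source directory, then kill them all at once.
    out/minikube-linux-arm64 mount -p functional-910777 /tmp/mount-src:/mount1 --alsologtostderr -v=1 &
    out/minikube-linux-arm64 mount -p functional-910777 /tmp/mount-src:/mount2 --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-910777 ssh "findmnt -T" /mount1
    out/minikube-linux-arm64 mount -p functional-910777 --kill=true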

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-910777
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-910777
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-910777
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (123.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-359542 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0913 18:38:29.994816  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:38:50.476236  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:31.438551  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-359542 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m2.315612189s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (123.18s)
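
The HA start invocation is reusable as-is; --ha provisions multiple control-plane nodes and --wait=true blocks until components report healthy (commands copied from the run above):

    # Bring up an HA cluster on the containerd runtime and confirm node health.
    out/minikube-linux-arm64 start -p ha-359542 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p ha-359542 status -v=7 --alsologtostderr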

                                                
                                    
TestMultiControlPlane/serial/DeployApp (37.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- rollout status deployment/busybox
E0913 18:40:53.360026  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-359542 -- rollout status deployment/busybox: (34.800940608s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-6c6gs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-9jbdf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-x44pc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-6c6gs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-9jbdf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-x44pc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-6c6gs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-9jbdf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-x44pc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (37.98s)
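
The rollout is followed by a DNS smoke test from inside each busybox pod; both an external name and the in-cluster service names must resolve (a sketch for one pod, names taken from this run):

    # External and in-cluster DNS resolution from a deployed pod.
    out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-6c6gs -- nslookup kubernetes.io
    out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-6c6gs -- nslookup kubernetes.default.svc.cluster.local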

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-6c6gs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-6c6gs -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-9jbdf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-9jbdf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-x44pc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-359542 -- exec busybox-7dff88458-x44pc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.64s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-359542 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-359542 -v=7 --alsologtostderr: (23.406690436s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-359542 status -v=7 --alsologtostderr: (1.054327131s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.46s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-359542 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-359542 status --output json -v=7 --alsologtostderr: (1.064631345s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp testdata/cp-test.txt ha-359542:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1346220953/001/cp-test_ha-359542.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542:/home/docker/cp-test.txt ha-359542-m02:/home/docker/cp-test_ha-359542_ha-359542-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m02 "sudo cat /home/docker/cp-test_ha-359542_ha-359542-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542:/home/docker/cp-test.txt ha-359542-m03:/home/docker/cp-test_ha-359542_ha-359542-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m03 "sudo cat /home/docker/cp-test_ha-359542_ha-359542-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542:/home/docker/cp-test.txt ha-359542-m04:/home/docker/cp-test_ha-359542_ha-359542-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m04 "sudo cat /home/docker/cp-test_ha-359542_ha-359542-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp testdata/cp-test.txt ha-359542-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1346220953/001/cp-test_ha-359542-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542-m02:/home/docker/cp-test.txt ha-359542:/home/docker/cp-test_ha-359542-m02_ha-359542.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542 "sudo cat /home/docker/cp-test_ha-359542-m02_ha-359542.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542-m02:/home/docker/cp-test.txt ha-359542-m03:/home/docker/cp-test_ha-359542-m02_ha-359542-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m03 "sudo cat /home/docker/cp-test_ha-359542-m02_ha-359542-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542-m02:/home/docker/cp-test.txt ha-359542-m04:/home/docker/cp-test_ha-359542-m02_ha-359542-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m04 "sudo cat /home/docker/cp-test_ha-359542-m02_ha-359542-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp testdata/cp-test.txt ha-359542-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1346220953/001/cp-test_ha-359542-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542-m03:/home/docker/cp-test.txt ha-359542:/home/docker/cp-test_ha-359542-m03_ha-359542.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542 "sudo cat /home/docker/cp-test_ha-359542-m03_ha-359542.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542-m03:/home/docker/cp-test.txt ha-359542-m02:/home/docker/cp-test_ha-359542-m03_ha-359542-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m02 "sudo cat /home/docker/cp-test_ha-359542-m03_ha-359542-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542-m03:/home/docker/cp-test.txt ha-359542-m04:/home/docker/cp-test_ha-359542-m03_ha-359542-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m04 "sudo cat /home/docker/cp-test_ha-359542-m03_ha-359542-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp testdata/cp-test.txt ha-359542-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1346220953/001/cp-test_ha-359542-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542-m04:/home/docker/cp-test.txt ha-359542:/home/docker/cp-test_ha-359542-m04_ha-359542.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542 "sudo cat /home/docker/cp-test_ha-359542-m04_ha-359542.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542-m04:/home/docker/cp-test.txt ha-359542-m02:/home/docker/cp-test_ha-359542-m04_ha-359542-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m02 "sudo cat /home/docker/cp-test_ha-359542-m04_ha-359542-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 cp ha-359542-m04:/home/docker/cp-test.txt ha-359542-m03:/home/docker/cp-test_ha-359542-m04_ha-359542-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m03 "sudo cat /home/docker/cp-test_ha-359542-m04_ha-359542-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.87s)
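
The matrix above exercises minikube cp in three directions: host to node, node to host, and node to node, with each copy verified by catting the file over ssh (a sketch of one round trip; secondary nodes are addressed as <profile>-mNN):

    # Host -> node copy, verified in place, then a node -> node copy.
    out/minikube-linux-arm64 -p ha-359542 cp testdata/cp-test.txt ha-359542-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-359542 ssh -n ha-359542-m02 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p ha-359542 cp ha-359542-m02:/home/docker/cp-test.txt ha-359542:/home/docker/cp-test_ha-359542-m02_ha-359542.txt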

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-359542 node stop m02 -v=7 --alsologtostderr: (12.141920321s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-359542 status -v=7 --alsologtostderr: exit status 7 (793.729112ms)

-- stdout --
	ha-359542
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-359542-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-359542-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-359542-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0913 18:42:04.749806  353940 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:42:04.750051  353940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:42:04.750065  353940 out.go:358] Setting ErrFile to fd 2...
	I0913 18:42:04.750072  353940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:42:04.750314  353940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
	I0913 18:42:04.750563  353940 out.go:352] Setting JSON to false
	I0913 18:42:04.750604  353940 mustload.go:65] Loading cluster: ha-359542
	I0913 18:42:04.750704  353940 notify.go:220] Checking for updates...
	I0913 18:42:04.751024  353940 config.go:182] Loaded profile config "ha-359542": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0913 18:42:04.751035  353940 status.go:255] checking status of ha-359542 ...
	I0913 18:42:04.751597  353940 cli_runner.go:164] Run: docker container inspect ha-359542 --format={{.State.Status}}
	I0913 18:42:04.782261  353940 status.go:330] ha-359542 host status = "Running" (err=<nil>)
	I0913 18:42:04.782290  353940 host.go:66] Checking if "ha-359542" exists ...
	I0913 18:42:04.782585  353940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-359542
	I0913 18:42:04.831665  353940 host.go:66] Checking if "ha-359542" exists ...
	I0913 18:42:04.832158  353940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:42:04.832218  353940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-359542
	I0913 18:42:04.852776  353940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/ha-359542/id_rsa Username:docker}
	I0913 18:42:04.958898  353940 ssh_runner.go:195] Run: systemctl --version
	I0913 18:42:04.963557  353940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:42:04.978690  353940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:42:05.042557  353940 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-13 18:42:05.031322688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:42:05.043217  353940 kubeconfig.go:125] found "ha-359542" server: "https://192.168.49.254:8443"
	I0913 18:42:05.043266  353940 api_server.go:166] Checking apiserver status ...
	I0913 18:42:05.043313  353940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:42:05.056232  353940 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	I0913 18:42:05.066598  353940 api_server.go:182] apiserver freezer: "12:freezer:/docker/2eb666fa73025a6b5033cab641115f43153d49713313be5db32d7f32f6d54ee7/kubepods/burstable/podb7dc04bd66512f5e2ab8243762619410/5ebe4a964a860f614079f3720f1d32ec0f93a3d20f85e047e40be95e67ed5c50"
	I0913 18:42:05.066674  353940 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2eb666fa73025a6b5033cab641115f43153d49713313be5db32d7f32f6d54ee7/kubepods/burstable/podb7dc04bd66512f5e2ab8243762619410/5ebe4a964a860f614079f3720f1d32ec0f93a3d20f85e047e40be95e67ed5c50/freezer.state
	I0913 18:42:05.076201  353940 api_server.go:204] freezer state: "THAWED"
	I0913 18:42:05.076238  353940 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0913 18:42:05.084693  353940 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0913 18:42:05.084739  353940 status.go:422] ha-359542 apiserver status = Running (err=<nil>)
	I0913 18:42:05.084756  353940 status.go:257] ha-359542 status: &{Name:ha-359542 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:42:05.084782  353940 status.go:255] checking status of ha-359542-m02 ...
	I0913 18:42:05.085304  353940 cli_runner.go:164] Run: docker container inspect ha-359542-m02 --format={{.State.Status}}
	I0913 18:42:05.103518  353940 status.go:330] ha-359542-m02 host status = "Stopped" (err=<nil>)
	I0913 18:42:05.103542  353940 status.go:343] host is not running, skipping remaining checks
	I0913 18:42:05.103549  353940 status.go:257] ha-359542-m02 status: &{Name:ha-359542-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:42:05.103571  353940 status.go:255] checking status of ha-359542-m03 ...
	I0913 18:42:05.104012  353940 cli_runner.go:164] Run: docker container inspect ha-359542-m03 --format={{.State.Status}}
	I0913 18:42:05.122313  353940 status.go:330] ha-359542-m03 host status = "Running" (err=<nil>)
	I0913 18:42:05.122338  353940 host.go:66] Checking if "ha-359542-m03" exists ...
	I0913 18:42:05.122803  353940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-359542-m03
	I0913 18:42:05.140867  353940 host.go:66] Checking if "ha-359542-m03" exists ...
	I0913 18:42:05.141189  353940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:42:05.141240  353940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-359542-m03
	I0913 18:42:05.158971  353940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/ha-359542-m03/id_rsa Username:docker}
	I0913 18:42:05.256892  353940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:42:05.271251  353940 kubeconfig.go:125] found "ha-359542" server: "https://192.168.49.254:8443"
	I0913 18:42:05.271279  353940 api_server.go:166] Checking apiserver status ...
	I0913 18:42:05.271350  353940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:42:05.282982  353940 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1375/cgroup
	I0913 18:42:05.293864  353940 api_server.go:182] apiserver freezer: "12:freezer:/docker/19b83063e784a2d2a59fc91ac55e251cfc4d5634baa190598b2e32214187d5d1/kubepods/burstable/pod7ad7cdc1cbaf80d572723230e0431fe1/68a777bb8141acf49cf8c7dedcbda614ff8c0e07bfd4735e1a1626bc1924e424"
	I0913 18:42:05.293936  353940 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/19b83063e784a2d2a59fc91ac55e251cfc4d5634baa190598b2e32214187d5d1/kubepods/burstable/pod7ad7cdc1cbaf80d572723230e0431fe1/68a777bb8141acf49cf8c7dedcbda614ff8c0e07bfd4735e1a1626bc1924e424/freezer.state
	I0913 18:42:05.306082  353940 api_server.go:204] freezer state: "THAWED"
	I0913 18:42:05.306121  353940 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0913 18:42:05.314182  353940 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0913 18:42:05.314213  353940 status.go:422] ha-359542-m03 apiserver status = Running (err=<nil>)
	I0913 18:42:05.314224  353940 status.go:257] ha-359542-m03 status: &{Name:ha-359542-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:42:05.314266  353940 status.go:255] checking status of ha-359542-m04 ...
	I0913 18:42:05.314602  353940 cli_runner.go:164] Run: docker container inspect ha-359542-m04 --format={{.State.Status}}
	I0913 18:42:05.333443  353940 status.go:330] ha-359542-m04 host status = "Running" (err=<nil>)
	I0913 18:42:05.333471  353940 host.go:66] Checking if "ha-359542-m04" exists ...
	I0913 18:42:05.333785  353940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-359542-m04
	I0913 18:42:05.350305  353940 host.go:66] Checking if "ha-359542-m04" exists ...
	I0913 18:42:05.350620  353940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:42:05.351324  353940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-359542-m04
	I0913 18:42:05.368877  353940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/ha-359542-m04/id_rsa Username:docker}
	I0913 18:42:05.468893  353940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:42:05.481172  353940 status.go:257] ha-359542-m04 status: &{Name:ha-359542-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.94s)
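
With one control-plane node stopped, status intentionally exits non-zero (7 in this run) while still printing per-node detail, so callers must treat that exit code as "degraded", not as a command failure (sketch):

    # Stop the second control plane and read the degraded status.
    out/minikube-linux-arm64 -p ha-359542 node stop m02 -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-359542 status -v=7 --alsologtostderr || echo "status exited $? (expected with a stopped node)"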

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (23.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-359542 node start m02 -v=7 --alsologtostderr: (22.385819887s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (150.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-359542 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-359542 -v=7 --alsologtostderr
E0913 18:42:34.990857  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:42:34.997622  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:42:35.021093  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:42:35.042585  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:42:35.084050  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:42:35.165469  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:42:35.327006  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:42:35.648678  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:42:36.290782  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:42:37.572109  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:42:40.133495  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:42:45.255078  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:42:55.497201  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-359542 -v=7 --alsologtostderr: (37.280764298s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-359542 --wait=true -v=7 --alsologtostderr
E0913 18:43:09.498946  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:43:15.978526  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:43:37.201900  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:43:56.940727  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-359542 --wait=true -v=7 --alsologtostderr: (1m53.032906493s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-359542
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (150.49s)
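
The restart check is a stop/start cycle that must preserve the full node list, including the worker added earlier (commands copied from this run):

    # Stop everything, restart with --wait=true, and confirm the node list survived.
    out/minikube-linux-arm64 stop -p ha-359542 -v=7 --alsologtostderr
    out/minikube-linux-arm64 start -p ha-359542 --wait=true -v=7 --alsologtostderr
    out/minikube-linux-arm64 node list -p ha-359542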

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-359542 node delete m03 -v=7 --alsologtostderr: (9.787747352s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.71s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.59s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.00s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 stop -v=7 --alsologtostderr
E0913 18:45:18.862102  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-359542 stop -v=7 --alsologtostderr: (35.878659558s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-359542 status -v=7 --alsologtostderr: exit status 7 (122.80971ms)

-- stdout --
	ha-359542
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-359542-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-359542-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0913 18:45:48.103743  368281 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:45:48.103977  368281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:45:48.104009  368281 out.go:358] Setting ErrFile to fd 2...
	I0913 18:45:48.104030  368281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:45:48.104330  368281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
	I0913 18:45:48.104591  368281 out.go:352] Setting JSON to false
	I0913 18:45:48.104645  368281 mustload.go:65] Loading cluster: ha-359542
	I0913 18:45:48.104732  368281 notify.go:220] Checking for updates...
	I0913 18:45:48.105144  368281 config.go:182] Loaded profile config "ha-359542": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0913 18:45:48.105186  368281 status.go:255] checking status of ha-359542 ...
	I0913 18:45:48.106125  368281 cli_runner.go:164] Run: docker container inspect ha-359542 --format={{.State.Status}}
	I0913 18:45:48.123706  368281 status.go:330] ha-359542 host status = "Stopped" (err=<nil>)
	I0913 18:45:48.123727  368281 status.go:343] host is not running, skipping remaining checks
	I0913 18:45:48.123735  368281 status.go:257] ha-359542 status: &{Name:ha-359542 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:45:48.123760  368281 status.go:255] checking status of ha-359542-m02 ...
	I0913 18:45:48.124189  368281 cli_runner.go:164] Run: docker container inspect ha-359542-m02 --format={{.State.Status}}
	I0913 18:45:48.152948  368281 status.go:330] ha-359542-m02 host status = "Stopped" (err=<nil>)
	I0913 18:45:48.152974  368281 status.go:343] host is not running, skipping remaining checks
	I0913 18:45:48.152982  368281 status.go:257] ha-359542-m02 status: &{Name:ha-359542-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:45:48.153003  368281 status.go:255] checking status of ha-359542-m04 ...
	I0913 18:45:48.153308  368281 cli_runner.go:164] Run: docker container inspect ha-359542-m04 --format={{.State.Status}}
	I0913 18:45:48.170076  368281 status.go:330] ha-359542-m04 host status = "Stopped" (err=<nil>)
	I0913 18:45:48.170101  368281 status.go:343] host is not running, skipping remaining checks
	I0913 18:45:48.170109  368281 status.go:257] ha-359542-m04 status: &{Name:ha-359542-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.00s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (75.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-359542 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-359542 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m14.823669967s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (75.88s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (40.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-359542 --control-plane -v=7 --alsologtostderr
E0913 18:47:34.991400  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-359542 --control-plane -v=7 --alsologtostderr: (39.436917203s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-359542 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-359542 status -v=7 --alsologtostderr: (1.141165827s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (40.58s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (52.51s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-625678 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0913 18:48:02.703590  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:48:09.504003  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-625678 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (52.509975599s)
--- PASS: TestJSONOutput/start/Command (52.51s)
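
With --output=json, start emits one JSON event per output line, tagged with the supplied --user, which is what the Audit and CurrentSteps subtests below assert over (invocation copied from this run):

    # Machine-readable start; each output line is an independent JSON event.
    out/minikube-linux-arm64 start -p json-output-625678 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=containerd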

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-625678 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-625678 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.75s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-625678 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-625678 --output=json --user=testUser: (5.753982928s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-150424 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-150424 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.640321ms)
-- stdout --
	{"specversion":"1.0","id":"4c72ee09-3ae8-4d27-b8b5-d6e8c685ab94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-150424] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"adbfad5b-a51e-4327-8934-cff329309c64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19636"}}
	{"specversion":"1.0","id":"b3d4767a-6142-4c1f-a366-f69656ab1c71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"468067ce-aa81-479f-9ed8-90387e09511f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19636-294721/kubeconfig"}}
	{"specversion":"1.0","id":"b482e26d-1dbd-4a20-97f9-2b1edb08901f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-294721/.minikube"}}
	{"specversion":"1.0","id":"261c3fdc-900f-4126-a7ac-89ec8b5efcf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"968e88fa-4480-42bd-9aa0-d893e760e2bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2d443aec-ddf7-48ec-afc3-5c93005c288b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
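Note: each stdout line above is a CloudEvents-style JSON object. To isolate just the error event when reproducing locally, a jq filter along these lines works (the jq invocation is our addition; the "type" value is taken from the output above):

out/minikube-linux-arm64 start -p json-output-error-150424 --memory=2200 --output=json --wait=true --driver=fail | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'
# prints only the error payload, e.g. exitcode 56 / "The driver 'fail' is not supported on linux/arm64"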
helpers_test.go:175: Cleaning up "json-output-error-150424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-150424
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (40.02s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-351604 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-351604 --network=: (37.866141343s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-351604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-351604
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-351604: (2.124457317s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.02s)

TestKicCustomNetwork/use_default_bridge_network (35.42s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-748423 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-748423 --network=bridge: (33.464390843s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-748423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-748423
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-748423: (1.93308538s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.42s)

TestKicExistingNetwork (32.11s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-652890 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-652890 --network=existing-network: (29.933468587s)
helpers_test.go:175: Cleaning up "existing-network-652890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-652890
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-652890: (2.017830578s)
--- PASS: TestKicExistingNetwork (32.11s)
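Note: unlike the two tests above, this one points minikube at a Docker network that already exists. A minimal reproduction sketch (the explicit `docker network create` step is our assumption about how the fixture is prepared; the test sets it up via helpers):

docker network create existing-network
out/minikube-linux-arm64 start -p existing-network-652890 --network=existing-network
docker network ls --format {{.Name}}   # existing-network should be reused, not replaced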

TestKicCustomSubnet (33.75s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-442311 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-442311 --subnet=192.168.60.0/24: (31.719172324s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-442311 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-442311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-442311
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-442311: (2.004253617s)
--- PASS: TestKicCustomSubnet (33.75s)
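Note: the Go template passed to `docker network inspect` above indexes into the first IPAM config entry of the network. Run by hand, the check looks like this (same profile and subnet as the test):

out/minikube-linux-arm64 start -p custom-subnet-442311 --subnet=192.168.60.0/24
docker network inspect custom-subnet-442311 --format "{{(index .IPAM.Config 0).Subnet}}"
# expected output: 192.168.60.0/24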

TestKicStaticIP (36.65s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-108122 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-108122 --static-ip=192.168.200.200: (34.491554752s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-108122 ip
helpers_test.go:175: Cleaning up "static-ip-108122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-108122
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-108122: (2.000130595s)
--- PASS: TestKicStaticIP (36.65s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (69.99s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-043890 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-043890 --driver=docker  --container-runtime=containerd: (33.730593142s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-047024 --driver=docker  --container-runtime=containerd
E0913 18:52:34.991297  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-047024 --driver=docker  --container-runtime=containerd: (30.966094942s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-043890
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-047024
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-047024" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-047024
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-047024: (2.044562448s)
helpers_test.go:175: Cleaning up "first-043890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-043890
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-043890: (1.970207413s)
--- PASS: TestMinikubeProfile (69.99s)

TestMountStart/serial/StartWithMountFirst (7.06s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-357848 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0913 18:53:09.498921  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-357848 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.054994142s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.06s)

TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-357848 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
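Note: the two tests above pair a mount-enabled start with an over-SSH listing of the mount point. Condensed (flags copied from the log), the flow is:

out/minikube-linux-arm64 start -p mount-start-1-357848 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=containerd
out/minikube-linux-arm64 -p mount-start-1-357848 ssh -- ls /minikube-host   # host files listed here confirm the mount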

TestMountStart/serial/StartWithMountSecond (6.91s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-359752 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-359752 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.912248232s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.91s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-359752 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.62s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-357848 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-357848 --alsologtostderr -v=5: (1.619832067s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-359752 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-359752
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-359752: (1.203937337s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.66s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-359752
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-359752: (6.660349886s)
--- PASS: TestMountStart/serial/RestartStopped (7.66s)

TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-359752 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (65.75s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881636 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0913 18:54:32.563981  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-881636 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m5.212992614s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.75s)

TestMultiNode/serial/DeployApp2Nodes (15.57s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-881636 -- rollout status deployment/busybox: (13.693383963s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- exec busybox-7dff88458-ndszl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- exec busybox-7dff88458-rbdlc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- exec busybox-7dff88458-ndszl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- exec busybox-7dff88458-rbdlc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- exec busybox-7dff88458-ndszl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- exec busybox-7dff88458-rbdlc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (15.57s)

TestMultiNode/serial/PingHostFrom2Pods (1.03s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- exec busybox-7dff88458-ndszl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- exec busybox-7dff88458-ndszl -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- exec busybox-7dff88458-rbdlc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881636 -- exec busybox-7dff88458-rbdlc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
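Note: the in-pod pipeline above extracts the host IP from busybox's nslookup output: `awk 'NR==5'` keeps the fifth line (where the answer appears in this busybox build) and `cut -d' ' -f3` takes the address field; that IP is then pinged. Standalone (pod and context names from the log; the line/field positions assume the same busybox image):

HOST_IP=$(kubectl --context multinode-881636 exec busybox-7dff88458-ndszl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
kubectl --context multinode-881636 exec busybox-7dff88458-ndszl -- ping -c 1 "$HOST_IP"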

TestMultiNode/serial/AddNode (16.25s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-881636 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-881636 -v 3 --alsologtostderr: (15.547055376s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.25s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-881636 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.33s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

TestMultiNode/serial/CopyFile (10.26s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 cp testdata/cp-test.txt multinode-881636:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 cp multinode-881636:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2866755125/001/cp-test_multinode-881636.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 cp multinode-881636:/home/docker/cp-test.txt multinode-881636-m02:/home/docker/cp-test_multinode-881636_multinode-881636-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636-m02 "sudo cat /home/docker/cp-test_multinode-881636_multinode-881636-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 cp multinode-881636:/home/docker/cp-test.txt multinode-881636-m03:/home/docker/cp-test_multinode-881636_multinode-881636-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636-m03 "sudo cat /home/docker/cp-test_multinode-881636_multinode-881636-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 cp testdata/cp-test.txt multinode-881636-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 cp multinode-881636-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2866755125/001/cp-test_multinode-881636-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 cp multinode-881636-m02:/home/docker/cp-test.txt multinode-881636:/home/docker/cp-test_multinode-881636-m02_multinode-881636.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636 "sudo cat /home/docker/cp-test_multinode-881636-m02_multinode-881636.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 cp multinode-881636-m02:/home/docker/cp-test.txt multinode-881636-m03:/home/docker/cp-test_multinode-881636-m02_multinode-881636-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636-m03 "sudo cat /home/docker/cp-test_multinode-881636-m02_multinode-881636-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 cp testdata/cp-test.txt multinode-881636-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 cp multinode-881636-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2866755125/001/cp-test_multinode-881636-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 cp multinode-881636-m03:/home/docker/cp-test.txt multinode-881636:/home/docker/cp-test_multinode-881636-m03_multinode-881636.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636 "sudo cat /home/docker/cp-test_multinode-881636-m03_multinode-881636.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 cp multinode-881636-m03:/home/docker/cp-test.txt multinode-881636-m02:/home/docker/cp-test_multinode-881636-m03_multinode-881636-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636-m02 "sudo cat /home/docker/cp-test_multinode-881636-m03_multinode-881636-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.26s)
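Note: the copy matrix above repeats one pattern across every source/destination pairing: copy a file onto a node, then read it back over SSH to confirm the contents arrived. The core pair in isolation (commands verbatim from the log):

out/minikube-linux-arm64 -p multinode-881636 cp testdata/cp-test.txt multinode-881636-m02:/home/docker/cp-test.txt
out/minikube-linux-arm64 -p multinode-881636 ssh -n multinode-881636-m02 "sudo cat /home/docker/cp-test.txt"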

TestMultiNode/serial/StopNode (2.31s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-881636 node stop m03: (1.226830564s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-881636 status: exit status 7 (525.849609ms)
-- stdout --
	multinode-881636
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-881636-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-881636-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-881636 status --alsologtostderr: exit status 7 (554.684586ms)
-- stdout --
	multinode-881636
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-881636-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-881636-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0913 18:55:24.599706  421575 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:55:24.599845  421575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:55:24.599887  421575 out.go:358] Setting ErrFile to fd 2...
	I0913 18:55:24.599893  421575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:55:24.600146  421575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
	I0913 18:55:24.600351  421575 out.go:352] Setting JSON to false
	I0913 18:55:24.600384  421575 mustload.go:65] Loading cluster: multinode-881636
	I0913 18:55:24.600499  421575 notify.go:220] Checking for updates...
	I0913 18:55:24.600812  421575 config.go:182] Loaded profile config "multinode-881636": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0913 18:55:24.600825  421575 status.go:255] checking status of multinode-881636 ...
	I0913 18:55:24.601349  421575 cli_runner.go:164] Run: docker container inspect multinode-881636 --format={{.State.Status}}
	I0913 18:55:24.625497  421575 status.go:330] multinode-881636 host status = "Running" (err=<nil>)
	I0913 18:55:24.625524  421575 host.go:66] Checking if "multinode-881636" exists ...
	I0913 18:55:24.625828  421575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-881636
	I0913 18:55:24.649820  421575 host.go:66] Checking if "multinode-881636" exists ...
	I0913 18:55:24.650140  421575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:55:24.650182  421575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-881636
	I0913 18:55:24.670713  421575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/multinode-881636/id_rsa Username:docker}
	I0913 18:55:24.773648  421575 ssh_runner.go:195] Run: systemctl --version
	I0913 18:55:24.778166  421575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:55:24.790057  421575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:55:24.859597  421575 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-13 18:55:24.849706311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:55:24.860247  421575 kubeconfig.go:125] found "multinode-881636" server: "https://192.168.67.2:8443"
	I0913 18:55:24.860289  421575 api_server.go:166] Checking apiserver status ...
	I0913 18:55:24.860337  421575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:55:24.871955  421575 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1446/cgroup
	I0913 18:55:24.882013  421575 api_server.go:182] apiserver freezer: "12:freezer:/docker/1b4ec06cd40f3d6287bf7eff4fe6dc324cdafa28e7fd97b900e63d8c75f9c14f/kubepods/burstable/pod1802d5bfbf6320c46e19ea650242c60a/adccdce2cf75827022ae724f5a8c0c5c7d3afff51b5b9e7ac3d558a73ecc7735"
	I0913 18:55:24.882086  421575 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1b4ec06cd40f3d6287bf7eff4fe6dc324cdafa28e7fd97b900e63d8c75f9c14f/kubepods/burstable/pod1802d5bfbf6320c46e19ea650242c60a/adccdce2cf75827022ae724f5a8c0c5c7d3afff51b5b9e7ac3d558a73ecc7735/freezer.state
	I0913 18:55:24.891782  421575 api_server.go:204] freezer state: "THAWED"
	I0913 18:55:24.891821  421575 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0913 18:55:24.899766  421575 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0913 18:55:24.899795  421575 status.go:422] multinode-881636 apiserver status = Running (err=<nil>)
	I0913 18:55:24.899806  421575 status.go:257] multinode-881636 status: &{Name:multinode-881636 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:55:24.899824  421575 status.go:255] checking status of multinode-881636-m02 ...
	I0913 18:55:24.900215  421575 cli_runner.go:164] Run: docker container inspect multinode-881636-m02 --format={{.State.Status}}
	I0913 18:55:24.917375  421575 status.go:330] multinode-881636-m02 host status = "Running" (err=<nil>)
	I0913 18:55:24.917409  421575 host.go:66] Checking if "multinode-881636-m02" exists ...
	I0913 18:55:24.917719  421575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-881636-m02
	I0913 18:55:24.934786  421575 host.go:66] Checking if "multinode-881636-m02" exists ...
	I0913 18:55:24.935114  421575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:55:24.935213  421575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-881636-m02
	I0913 18:55:24.958996  421575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/19636-294721/.minikube/machines/multinode-881636-m02/id_rsa Username:docker}
	I0913 18:55:25.065974  421575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:55:25.079377  421575 status.go:257] multinode-881636-m02 status: &{Name:multinode-881636-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:55:25.079430  421575 status.go:255] checking status of multinode-881636-m03 ...
	I0913 18:55:25.079800  421575 cli_runner.go:164] Run: docker container inspect multinode-881636-m03 --format={{.State.Status}}
	I0913 18:55:25.098206  421575 status.go:330] multinode-881636-m03 host status = "Stopped" (err=<nil>)
	I0913 18:55:25.098235  421575 status.go:343] host is not running, skipping remaining checks
	I0913 18:55:25.098272  421575 status.go:257] multinode-881636-m03 status: &{Name:multinode-881636-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
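Note: the stderr log above shows how the status probe verifies the control plane: find the kube-apiserver PID, confirm its freezer cgroup is THAWED (i.e. not paused), then hit /healthz. On the node, roughly (a sketch; curl stands in for the Go HTTP client, and -k skips certificate verification):

PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup   # yields the cgroup whose freezer.state should read THAWED
curl -ks https://192.168.67.2:8443/healthz         # expect: ok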

TestMultiNode/serial/StartAfterStop (9.69s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-881636 node start m03 -v=7 --alsologtostderr: (8.922419398s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.69s)

TestMultiNode/serial/RestartKeepsNodes (94.64s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-881636
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-881636
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-881636: (24.994613352s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881636 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-881636 --wait=true -v=8 --alsologtostderr: (1m9.519026619s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-881636
--- PASS: TestMultiNode/serial/RestartKeepsNodes (94.64s)

TestMultiNode/serial/DeleteNode (5.56s)
TestMultiNode/serial/DeleteNode (5.56s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-881636 node delete m03: (4.841738185s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.56s)
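Note: the quoted go-template above prints each remaining node's Ready condition; an equivalent jsonpath form (our rewrite, not what the test runs) would be:

kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
# expect one "True" per node remaining after the delete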

TestMultiNode/serial/StopMultiNode (24.17s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 stop
E0913 18:57:34.991223  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-881636 stop: (23.959018417s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-881636 status: exit status 7 (106.964769ms)
-- stdout --
	multinode-881636
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-881636-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-881636 status --alsologtostderr: exit status 7 (102.068503ms)
-- stdout --
	multinode-881636
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-881636-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0913 18:57:39.106268  430040 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:57:39.106476  430040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:57:39.106505  430040 out.go:358] Setting ErrFile to fd 2...
	I0913 18:57:39.106529  430040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:57:39.106810  430040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
	I0913 18:57:39.107046  430040 out.go:352] Setting JSON to false
	I0913 18:57:39.107108  430040 mustload.go:65] Loading cluster: multinode-881636
	I0913 18:57:39.107216  430040 notify.go:220] Checking for updates...
	I0913 18:57:39.108338  430040 config.go:182] Loaded profile config "multinode-881636": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0913 18:57:39.108388  430040 status.go:255] checking status of multinode-881636 ...
	I0913 18:57:39.108983  430040 cli_runner.go:164] Run: docker container inspect multinode-881636 --format={{.State.Status}}
	I0913 18:57:39.126449  430040 status.go:330] multinode-881636 host status = "Stopped" (err=<nil>)
	I0913 18:57:39.126470  430040 status.go:343] host is not running, skipping remaining checks
	I0913 18:57:39.126478  430040 status.go:257] multinode-881636 status: &{Name:multinode-881636 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:57:39.126516  430040 status.go:255] checking status of multinode-881636-m02 ...
	I0913 18:57:39.126836  430040 cli_runner.go:164] Run: docker container inspect multinode-881636-m02 --format={{.State.Status}}
	I0913 18:57:39.158244  430040 status.go:330] multinode-881636-m02 host status = "Stopped" (err=<nil>)
	I0913 18:57:39.158264  430040 status.go:343] host is not running, skipping remaining checks
	I0913 18:57:39.158278  430040 status.go:257] multinode-881636-m02 status: &{Name:multinode-881636-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.17s)

TestMultiNode/serial/RestartMultiNode (48.15s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881636 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0913 18:58:09.498598  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-881636 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.183547357s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881636 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.15s)

TestMultiNode/serial/ValidateNameConflict (31.27s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-881636
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881636-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-881636-m02 --driver=docker  --container-runtime=containerd: exit status 14 (91.168324ms)
-- stdout --
	* [multinode-881636-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-294721/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-294721/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-881636-m02' is duplicated with machine name 'multinode-881636-m02' in profile 'multinode-881636'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881636-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-881636-m03 --driver=docker  --container-runtime=containerd: (28.778058748s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-881636
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-881636: exit status 80 (318.656383ms)
-- stdout --
	* Adding node m03 to cluster multinode-881636 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-881636-m03 already exists in multinode-881636-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-881636-m03
E0913 18:58:58.065212  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-881636-m03: (1.986206048s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.27s)

TestPreload (122.04s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-852800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-852800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m24.007671838s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-852800 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-852800 image pull gcr.io/k8s-minikube/busybox: (1.994059782s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-852800
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-852800: (12.166535623s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-852800 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-852800 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.988276427s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-852800 image list
helpers_test.go:175: Cleaning up "test-preload-852800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-852800
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-852800: (2.488024142s)
--- PASS: TestPreload (122.04s)

TestScheduledStopUnix (105.96s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-461647 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-461647 --memory=2048 --driver=docker  --container-runtime=containerd: (29.460911662s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-461647 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-461647 -n scheduled-stop-461647
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-461647 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-461647 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-461647 -n scheduled-stop-461647
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-461647
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-461647 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0913 19:02:34.991218  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-461647
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-461647: exit status 7 (61.913046ms)

-- stdout --
	scheduled-stop-461647
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-461647 -n scheduled-stop-461647
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-461647 -n scheduled-stop-461647: exit status 7 (66.23728ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-461647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-461647
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-461647: (4.641337036s)
--- PASS: TestScheduledStopUnix (105.96s)

TestInsufficientStorage (11.25s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-833576 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-833576 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.391097572s)

-- stdout --
	{"specversion":"1.0","id":"9fb0d946-8f00-4fb3-a197-5dd60ea144ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-833576] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"614e5a37-9d19-4bf2-a52a-9021dd2dbb4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19636"}}
	{"specversion":"1.0","id":"7e221efe-9aa5-4e13-b059-55a40f1e43a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"377609f8-efd0-44f9-a4bc-22f9354d18f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19636-294721/kubeconfig"}}
	{"specversion":"1.0","id":"3af5ad84-8155-4dcc-a142-bc9323df9503","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-294721/.minikube"}}
	{"specversion":"1.0","id":"65fbd61b-3664-4b2a-981d-6f1d8a6abdf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e0d36e80-1a6b-414b-bfad-0896eaf62002","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5735700c-3e1c-4e33-a9ff-9ee1297e41ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3fa29824-6ddf-4f44-894c-070a0ae52d26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"295bf786-afd4-4bbd-8e1a-0f2512f50953","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"86afc232-731c-4a9d-a991-abda4d93a422","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2d5ea612-a0d0-4dd7-ab3d-93aae90708a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-833576\" primary control-plane node in \"insufficient-storage-833576\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5b0e4ee3-9c35-47a1-b142-cbf2f9730364","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726193793-19634 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0dcd4cfb-dd72-46a2-b283-44003cc01619","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"776e6e62-8516-47b5-97f2-17b89b614b0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-833576 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-833576 --output=json --layout=cluster: exit status 7 (301.321329ms)

-- stdout --
	{"Name":"insufficient-storage-833576","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-833576","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0913 19:02:59.514878  448655 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-833576" does not appear in /home/jenkins/minikube-integration/19636-294721/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-833576 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-833576 --output=json --layout=cluster: exit status 7 (307.281094ms)

-- stdout --
	{"Name":"insufficient-storage-833576","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-833576","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0913 19:02:59.828765  448714 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-833576" does not appear in /home/jenkins/minikube-integration/19636-294721/kubeconfig
	E0913 19:02:59.839169  448714 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/insufficient-storage-833576/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-833576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-833576
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-833576: (2.249943499s)
--- PASS: TestInsufficientStorage (11.25s)
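
The status payloads above (from "--output=json --layout=cluster") are plain JSON and can be decoded directly. A minimal Go sketch follows; the struct fields are inferred from the output shown in this log, and the real schema may carry more fields:

package main

import (
	"encoding/json"
	"fmt"
)

// Field names mirror the JSON keys visible in the log above.
type Component struct {
	Name       string
	StatusCode int
	StatusName string
}

type Node struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]Component
}

type ClusterState struct {
	Name         string
	StatusCode   int
	StatusName   string
	StatusDetail string
	Components   map[string]Component
	Nodes        []Node
}

func main() {
	raw := []byte(`{"Name":"insufficient-storage-833576","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space"}`)
	var st ClusterState
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	// The codes mirror HTTP: 507 InsufficientStorage here, 200 when healthy.
	fmt.Printf("%s => %d (%s)\n", st.Name, st.StatusCode, st.StatusName)
}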

TestRunningBinaryUpgrade (82.49s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3773612994 start -p running-upgrade-305379 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0913 19:08:09.498695  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3773612994 start -p running-upgrade-305379 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.037139265s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-305379 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-305379 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.121873953s)
helpers_test.go:175: Cleaning up "running-upgrade-305379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-305379
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-305379: (3.02603709s)
--- PASS: TestRunningBinaryUpgrade (82.49s)

TestKubernetesUpgrade (349.05s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-947120 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-947120 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.519745006s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-947120
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-947120: (1.297285929s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-947120 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-947120 status --format={{.Host}}: exit status 7 (96.309888ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-947120 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-947120 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m34.455959104s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-947120 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-947120 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-947120 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (87.046642ms)

-- stdout --
	* [kubernetes-upgrade-947120] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-294721/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-294721/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-947120
	    minikube start -p kubernetes-upgrade-947120 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9471202 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-947120 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-947120 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-947120 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.010382645s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-947120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-947120
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-947120: (2.485723101s)
--- PASS: TestKubernetesUpgrade (349.05s)
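
The whole sequence reduces to four invocations and their exit codes; 106 is the K8S_DOWNGRADE_UNSUPPORTED exit code seen above. A hedged Go sketch of that flow (the profile name here is illustrative, and errors on the happy-path steps are not checked):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes the minikube binary and returns its exit code.
func run(args ...string) int {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return -1 // could not start at all (missing binary, signal, ...)
	}
	return 0
}

func main() {
	p := "kubernetes-upgrade-example"
	run("start", "-p", p, "--kubernetes-version=v1.20.0",
		"--driver=docker", "--container-runtime=containerd")
	run("stop", "-p", p)
	run("start", "-p", p, "--kubernetes-version=v1.31.1",
		"--driver=docker", "--container-runtime=containerd")
	// Downgrading an existing cluster is refused rather than attempted.
	code := run("start", "-p", p, "--kubernetes-version=v1.20.0",
		"--driver=docker", "--container-runtime=containerd")
	if code == 106 {
		fmt.Println("downgrade refused with K8S_DOWNGRADE_UNSUPPORTED, as expected")
	}
}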

TestMissingContainerUpgrade (164.98s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1604379316 start -p missing-upgrade-011813 --memory=2200 --driver=docker  --container-runtime=containerd
E0913 19:03:09.498322  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1604379316 start -p missing-upgrade-011813 --memory=2200 --driver=docker  --container-runtime=containerd: (1m23.071589818s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-011813
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-011813: (10.291771051s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-011813
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-011813 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-011813 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m7.981170556s)
helpers_test.go:175: Cleaning up "missing-upgrade-011813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-011813
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-011813: (2.275024659s)
--- PASS: TestMissingContainerUpgrade (164.98s)

TestPause/serial/Start (92.27s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-011406 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-011406 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m32.268572297s)
--- PASS: TestPause/serial/Start (92.27s)

TestPause/serial/SecondStartNoReconfiguration (6.34s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-011406 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-011406 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.317993549s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.34s)

TestPause/serial/Pause (0.76s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-011406 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-011406 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-011406 --output=json --layout=cluster: exit status 2 (328.167113ms)

-- stdout --
	{"Name":"pause-011406","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-011406","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)

TestPause/serial/Unpause (0.7s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-011406 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (0.83s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-011406 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

TestPause/serial/DeletePaused (2.57s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-011406 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-011406 --alsologtostderr -v=5: (2.567373979s)
--- PASS: TestPause/serial/DeletePaused (2.57s)

TestPause/serial/VerifyDeletedResources (0.15s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-011406
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-011406: exit status 1 (23.96753ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-011406: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.15s)
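
The deleted-resources check treats a failing "docker volume inspect" as success: once the profile is gone, the lookup must fail with a "no such volume" error, as in the stderr above. A minimal Go sketch of that inverted assertion:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// volumeGone reports whether Docker no longer knows the named volume.
func volumeGone(name string) bool {
	cmd := exec.Command("docker", "volume", "inspect", name)
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	err := cmd.Run()
	return err != nil && strings.Contains(stderr.String(), "no such volume")
}

func main() {
	if volumeGone("pause-011406") {
		fmt.Println("profile volume cleaned up, as expected")
	} else {
		fmt.Println("volume still present or Docker unavailable")
	}
}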

TestStoppedBinaryUpgrade/Setup (0.84s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.84s)

TestStoppedBinaryUpgrade/Upgrade (107.09s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2621690425 start -p stopped-upgrade-755741 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2621690425 start -p stopped-upgrade-755741 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.877507483s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2621690425 -p stopped-upgrade-755741 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2621690425 -p stopped-upgrade-755741 stop: (19.974117552s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-755741 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0913 19:07:34.991558  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-755741 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.241514314s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (107.09s)
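
The flow here is: provision with a previously released binary, stop the cluster, then restart the same profile with the binary under test, which must adopt it in place. A Go sketch under those assumptions (paths and profile name are taken from the log; step errors are ignored for brevity, whereas the real test fails fast on each step):

package main

import "os/exec"

func main() {
	old := "/tmp/minikube-v1.26.0.2621690425" // previously released binary (path from the log)
	cur := "out/minikube-linux-arm64"         // binary under test
	p := "stopped-upgrade-755741"

	exec.Command(old, "start", "-p", p, "--memory=2200",
		"--vm-driver=docker", "--container-runtime=containerd").Run()
	exec.Command(old, "-p", p, "stop").Run()
	// The new binary restarts the stopped profile without reprovisioning it.
	exec.Command(cur, "start", "-p", p, "--memory=2200",
		"--driver=docker", "--container-runtime=containerd").Run()
}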

TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-755741
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-971152 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-971152 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (90.178738ms)

-- stdout --
	* [NoKubernetes-971152] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-294721/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-294721/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (34.53s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-971152 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-971152 --driver=docker  --container-runtime=containerd: (34.09405222s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-971152 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.53s)

TestNoKubernetes/serial/StartWithStopK8s (19.42s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-971152 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-971152 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.700970653s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-971152 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-971152 status -o json: exit status 2 (420.12908ms)

-- stdout --
	{"Name":"NoKubernetes-971152","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-971152
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-971152: (2.299088494s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.42s)

TestNoKubernetes/serial/Start (6.75s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-971152 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-971152 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.749067114s)
--- PASS: TestNoKubernetes/serial/Start (6.75s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-971152 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-971152 "sudo systemctl is-active --quiet service kubelet": exit status 1 (357.645773ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)
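
"systemctl is-active" exits with status 3 when the unit is inactive, so the ssh probe above passes precisely because the remote command fails. A minimal Go sketch of the same probe (profile name from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "ssh",
		"-p", "NoKubernetes-971152",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet inactive, as expected:", err)
	} else {
		fmt.Println("kubelet is unexpectedly running")
	}
}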

TestNetworkPlugins/group/false (4.54s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-748645 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-748645 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (233.012255ms)

-- stdout --
	* [false-748645] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-294721/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-294721/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0913 19:10:40.299775  488513 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:10:40.299967  488513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:10:40.299978  488513 out.go:358] Setting ErrFile to fd 2...
	I0913 19:10:40.299983  488513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:10:40.300212  488513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-294721/.minikube/bin
	I0913 19:10:40.300649  488513 out.go:352] Setting JSON to false
	I0913 19:10:40.301653  488513 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10388,"bootTime":1726244253,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0913 19:10:40.301749  488513 start.go:139] virtualization:  
	I0913 19:10:40.305556  488513 out.go:177] * [false-748645] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0913 19:10:40.308240  488513 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:10:40.308343  488513 notify.go:220] Checking for updates...
	I0913 19:10:40.312479  488513 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:10:40.314461  488513 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-294721/kubeconfig
	I0913 19:10:40.316514  488513 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-294721/.minikube
	I0913 19:10:40.318119  488513 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0913 19:10:40.320288  488513 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:10:40.322684  488513 config.go:182] Loaded profile config "NoKubernetes-971152": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0913 19:10:40.322792  488513 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:10:40.353785  488513 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 19:10:40.353919  488513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 19:10:40.425864  488513 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-09-13 19:10:40.416267163 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 19:10:40.425986  488513 docker.go:318] overlay module found
	I0913 19:10:40.428008  488513 out.go:177] * Using the docker driver based on user configuration
	I0913 19:10:40.430186  488513 start.go:297] selected driver: docker
	I0913 19:10:40.430204  488513 start.go:901] validating driver "docker" against <nil>
	I0913 19:10:40.430219  488513 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:10:40.433179  488513 out.go:201] 
	W0913 19:10:40.435683  488513 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0913 19:10:40.437621  488513 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-748645 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-748645

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-748645

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-748645

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-748645

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-748645

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-748645

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-748645

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-748645

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-748645

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-748645

>>> host: /etc/nsswitch.conf:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: /etc/hosts:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: /etc/resolv.conf:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-748645

>>> host: crictl pods:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: crictl containers:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> k8s: describe netcat deployment:
error: context "false-748645" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-748645" does not exist

>>> k8s: netcat logs:
error: context "false-748645" does not exist

>>> k8s: describe coredns deployment:
error: context "false-748645" does not exist

>>> k8s: describe coredns pods:
error: context "false-748645" does not exist

>>> k8s: coredns logs:
error: context "false-748645" does not exist

>>> k8s: describe api server pod(s):
error: context "false-748645" does not exist

>>> k8s: api server logs:
error: context "false-748645" does not exist

>>> host: /etc/cni:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: ip a s:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: ip r s:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: iptables-save:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: iptables table nat:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> k8s: describe kube-proxy daemon set:
error: context "false-748645" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-748645" does not exist

>>> k8s: kube-proxy logs:
error: context "false-748645" does not exist

>>> host: kubelet daemon status:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: kubelet daemon config:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> k8s: kubelet logs:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-748645

>>> host: docker daemon status:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: docker daemon config:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: /etc/docker/daemon.json:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: docker system info:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: cri-docker daemon status:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: cri-docker daemon config:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: cri-dockerd version:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: containerd daemon status:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: containerd daemon config:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

>>> host: /etc/containerd/config.toml:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"


                                                
                                                
>>> host: containerd config dump:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-748645"

                                                
                                                
----------------------- debugLogs end: false-748645 [took: 4.156523442s] --------------------------------
helpers_test.go:175: Cleaning up "false-748645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-748645
--- PASS: TestNetworkPlugins/group/false (4.54s)

TestNoKubernetes/serial/ProfileList (0.76s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.76s)

TestNoKubernetes/serial/Stop (1.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-971152
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-971152: (1.264021157s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

TestNoKubernetes/serial/StartNoArgs (8.21s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-971152 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-971152 --driver=docker  --container-runtime=containerd: (8.212990078s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.21s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-971152 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-971152 "sudo systemctl is-active --quiet service kubelet": exit status 1 (379.681421ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)
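
The pass above hinges on exit-code semantics: `systemctl is-active` exits 0 only when the unit is active and non-zero (status 3, i.e. inactive) otherwise, so the "failed" ssh command is the desired outcome. A minimal Go sketch of that pattern, with a hypothetical helper name rather than the actual no_kubernetes_test.go code:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletInactive asserts over ssh that the kubelet systemd unit is NOT
// running: `systemctl is-active` exits 0 for an active unit and non-zero
// (typically 3, "inactive") otherwise.
func kubeletInactive(profile string) (bool, error) {
	err := exec.Command("out/minikube-linux-arm64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet").Run()
	if err == nil {
		return false, nil // exit 0: kubelet is running, which would fail the test
	}
	if _, ok := err.(*exec.ExitError); ok {
		return true, nil // non-zero exit: kubelet is not running, as expected
	}
	return false, err // ssh could not be invoked at all
}

func main() {
	inactive, err := kubeletInactive("NoKubernetes-971152")
	fmt.Println(inactive, err)
}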

TestStartStop/group/old-k8s-version/serial/FirstStart (163.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-150959 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0913 19:12:34.991128  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:13:09.498455  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-150959 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m43.606177282s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (163.61s)

TestStartStop/group/no-preload/serial/FirstStart (70.49s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-048437 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-048437 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m10.486128418s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.49s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-150959 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7d6aa9f0-b6c4-434c-aab2-7d6cd5440d50] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7d6aa9f0-b6c4-434c-aab2-7d6cd5440d50] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.031340653s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-150959 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.72s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-150959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-150959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.596958105s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-150959 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.79s)

TestStartStop/group/old-k8s-version/serial/Stop (12.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-150959 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-150959 --alsologtostderr -v=3: (12.409130933s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.41s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-150959 -n old-k8s-version-150959
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-150959 -n old-k8s-version-150959: exit status 7 (117.015626ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-150959 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
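
`minikube status` exits non-zero when the host is not running; in this run a stopped host surfaces as exit status 7, which the test explicitly tolerates ("may be ok") while still reading the Host field from stdout. A rough Go sketch of that tolerance, assuming only the exit-code behavior visible in the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "old-k8s-version-150959"
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Exit status 7 is what a deliberately stopped host produces here,
		// so it is reported but not treated as a failure.
		fmt.Printf("host=%s exit=%d (may be ok)\n",
			strings.TrimSpace(string(out)), exitErr.ExitCode())
	} else if err != nil {
		log.Fatal(err) // the status command could not be run at all
	} else {
		fmt.Println("host=" + strings.TrimSpace(string(out)))
	}
}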

TestStartStop/group/old-k8s-version/serial/SecondStart (378.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-150959 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0913 19:15:38.066569  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-150959 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (6m17.451310625s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-150959 -n old-k8s-version-150959
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (378.03s)

TestStartStop/group/no-preload/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-048437 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e1c377a2-a4b3-4752-b438-83d5b2fdfca3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e1c377a2-a4b3-4752-b438-83d5b2fdfca3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003689324s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-048437 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.42s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.43s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-048437 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-048437 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.166109292s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-048437 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.43s)

TestStartStop/group/no-preload/serial/Stop (12.96s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-048437 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-048437 --alsologtostderr -v=3: (12.961963868s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.96s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-048437 -n no-preload-048437
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-048437 -n no-preload-048437: exit status 7 (85.774448ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-048437 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (277.9s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-048437 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0913 19:17:34.991919  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:18:09.498345  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-048437 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m37.503023427s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-048437 -n no-preload-048437
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (277.90s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xsrqw" [04ec9faa-ca23-49bb-8320-a0ffd703e0e4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004862182s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xsrqw" [04ec9faa-ca23-49bb-8320-a0ffd703e0e4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004460405s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-048437 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-048437 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
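
The image check shells out to `image list --format=json` and flags anything minikube did not ship itself. A Go sketch of that idea; the `repoTags` field name and the single-registry allow-list are illustrative assumptions, not minikube's verified JSON schema or the actual test logic:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// image mirrors just the field this sketch needs; "repoTags" is an assumed
// key, not confirmed against minikube's real output.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "no-preload-048437",
		"image", "list", "--format=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			// Illustrative rule only: report tags outside registry.k8s.io,
			// which matches the kindnetd and busybox findings above.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}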

TestStartStop/group/no-preload/serial/Pause (3.2s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-048437 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-048437 -n no-preload-048437
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-048437 -n no-preload-048437: exit status 2 (355.059606ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-048437 -n no-preload-048437
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-048437 -n no-preload-048437: exit status 2 (345.59227ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-048437 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-048437 -n no-preload-048437
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-048437 -n no-preload-048437
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.20s)
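
While the cluster is paused, `status` reports APIServer as Paused and Kubelet as Stopped, each via exit status 2, and the test accepts those non-zero exits before unpausing. A compact Go sketch of the same round trip (hypothetical helper; the exit-code interpretation is taken from the log above, not from minikube documentation):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// status fetches one field of `minikube status`, tolerating the non-zero
// exit that paused components produce ("may be ok" in the log above).
func status(profile, field string) string {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	if err != nil {
		if _, ok := err.(*exec.ExitError); !ok {
			log.Fatal(err) // only a failure to launch the command is fatal
		}
	}
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "no-preload-048437"
	if err := exec.Command("out/minikube-linux-arm64", "pause", "-p", profile).Run(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver:", status(profile, "APIServer")) // expect "Paused"
	fmt.Println("kubelet:", status(profile, "Kubelet"))     // expect "Stopped"
	if err := exec.Command("out/minikube-linux-arm64", "unpause", "-p", profile).Run(); err != nil {
		log.Fatal(err)
	}
}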

TestStartStop/group/embed-certs/serial/FirstStart (93.59s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-341382 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-341382 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m33.585330301s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (93.59s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zhvj2" [7ae292ad-6e93-46b0-9784-c39c6926a856] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004680516s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zhvj2" [7ae292ad-6e93-46b0-9784-c39c6926a856] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01178062s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-150959 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.22s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-150959 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/old-k8s-version/serial/Pause (3.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-150959 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-150959 --alsologtostderr -v=1: (1.316186183s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-150959 -n old-k8s-version-150959
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-150959 -n old-k8s-version-150959: exit status 2 (470.277258ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-150959 -n old-k8s-version-150959
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-150959 -n old-k8s-version-150959: exit status 2 (447.835162ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-150959 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-150959 -n old-k8s-version-150959
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-150959 -n old-k8s-version-150959
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.78s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-856372 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0913 19:22:34.991377  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-856372 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (51.911929011s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.91s)

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-341382 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0d3524d0-943d-4cc9-b270-355c41512e49] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0d3524d0-943d-4cc9-b270-355c41512e49] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004484319s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-341382 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-856372 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [45adb00a-bf8b-4f99-bd7e-cccac35bde8e] Pending
helpers_test.go:344: "busybox" [45adb00a-bf8b-4f99-bd7e-cccac35bde8e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [45adb00a-bf8b-4f99-bd7e-cccac35bde8e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004409155s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-856372 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.45s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-341382 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-341382 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.020765953s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-341382 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/embed-certs/serial/Stop (12.24s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-341382 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-341382 --alsologtostderr -v=3: (12.238675691s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.24s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-856372 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-856372 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.328951273s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-856372 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.45s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-856372 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-856372 --alsologtostderr -v=3: (12.036560095s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-341382 -n embed-certs-341382
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-341382 -n embed-certs-341382: exit status 7 (75.470195ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-341382 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (292.84s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-341382 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-341382 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m52.410566817s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-341382 -n embed-certs-341382
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (292.84s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-856372 -n default-k8s-diff-port-856372
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-856372 -n default-k8s-diff-port-856372: exit status 7 (74.886747ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-856372 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (273.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-856372 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0913 19:23:09.498789  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:24:53.158724  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:24:53.165190  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:24:53.176538  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:24:53.198049  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:24:53.239418  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:24:53.321203  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:24:53.482588  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:24:53.804814  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:24:54.446596  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:24:55.728141  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:24:58.289948  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:03.412072  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:13.654112  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:34.135771  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:48.592623  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:48.598957  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:48.610311  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:48.631737  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:48.673269  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:48.754996  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:48.916291  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:49.237841  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:49.879944  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:51.162174  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:53.723480  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:25:58.845239  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:26:09.086880  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:26:15.097691  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:26:29.568906  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:27:10.530357  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:27:34.991447  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:27:37.019980  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-856372 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m32.799439742s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-856372 -n default-k8s-diff-port-856372
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (273.21s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-d6dmk" [5bda5ba6-3630-4306-9ab3-641d4d4437c3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004383181s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-d6dmk" [5bda5ba6-3630-4306-9ab3-641d4d4437c3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003507257s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-856372 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-856372 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-856372 --alsologtostderr -v=1
E0913 19:27:52.568218  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-856372 -n default-k8s-diff-port-856372
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-856372 -n default-k8s-diff-port-856372: exit status 2 (325.986114ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-856372 -n default-k8s-diff-port-856372
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-856372 -n default-k8s-diff-port-856372: exit status 2 (338.289695ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-856372 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-856372 -n default-k8s-diff-port-856372
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-856372 -n default-k8s-diff-port-856372
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.20s)
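
The Pause subtest above is a pause/status/unpause round trip: while the cluster is paused, status --format={{.APIServer}} prints Paused, status --format={{.Kubelet}} prints Stopped, and both calls exit 2, which the harness treats as acceptable ("may be ok"). Below is a minimal standalone Go sketch of the same round trip, not the start_stop_delete_test.go helpers; the binary path and profile name are copied from the log above.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary under test with the given arguments and
// returns its combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "default-k8s-diff-port-856372" // profile name from the log above
	// Pause the cluster; the status probes below are expected to exit 2
	// while components are paused.
	if _, err := run("pause", "-p", profile, "--alsologtostderr", "-v=1"); err != nil {
		fmt.Println("pause:", err)
	}
	api, _ := run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	kubelet, _ := run("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile)
	fmt.Print(api, kubelet) // expect "Paused" and "Stopped"
	// Unpause and re-run the same probes, which should now succeed.
	if _, err := run("unpause", "-p", profile, "--alsologtostderr", "-v=1"); err != nil {
		fmt.Println("unpause:", err)
	}
}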

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-72tgk" [1bb88196-14d3-4ec8-8bb2-7bd9c6b0653e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006121353s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/FirstStart (42.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-498474 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-498474 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (42.063108506s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.06s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-72tgk" [1bb88196-14d3-4ec8-8bb2-7bd9c6b0653e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003596213s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-341382 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-341382 image list --format=json
E0913 19:28:09.498338  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (4.23s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-341382 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-341382 --alsologtostderr -v=1: (1.111034092s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-341382 -n embed-certs-341382
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-341382 -n embed-certs-341382: exit status 2 (397.077473ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-341382 -n embed-certs-341382
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-341382 -n embed-certs-341382: exit status 2 (426.569645ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-341382 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-341382 -n embed-certs-341382
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-341382 -n embed-certs-341382
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.23s)

TestNetworkPlugins/group/auto/Start (84.72s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-748645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0913 19:28:32.453926  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-748645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m24.718095039s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.72s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-498474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-498474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.34828613s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.35s)

TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-498474 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-498474 --alsologtostderr -v=3: (1.30132358s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-498474 -n newest-cni-498474
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-498474 -n newest-cni-498474: exit status 7 (85.866115ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-498474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (20.77s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-498474 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-498474 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (20.288938606s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-498474 -n newest-cni-498474
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.77s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-498474 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
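
The VerifyKubernetesImages subtests run image list --format=json and diff the result against the image set expected for this Kubernetes version, logging anything extra (kindest/kindnetd above) as a non-minikube image. A rough Go sketch of the listing step follows; it assumes the command emits a JSON array of image objects with a repoTags field, which is an assumption about the output schema, not taken from the minikube source.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "newest-cni-498474",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	// Assumed schema: [{"repoTags": ["registry.k8s.io/kube-apiserver:v1.31.1", ...], ...}, ...]
	var images []map[string]any
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	// Print every tag; the real test compares this listing against an
	// expected per-version list instead of printing it.
	for _, img := range images {
		if tags, ok := img["repoTags"].([]any); ok {
			for _, tag := range tags {
				fmt.Println(tag)
			}
		}
	}
}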

TestStartStop/group/newest-cni/serial/Pause (3.03s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-498474 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-498474 -n newest-cni-498474
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-498474 -n newest-cni-498474: exit status 2 (325.254323ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-498474 -n newest-cni-498474
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-498474 -n newest-cni-498474: exit status 2 (324.067022ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-498474 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-498474 -n newest-cni-498474
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-498474 -n newest-cni-498474
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.03s)
E0913 19:34:42.880249  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/auto-748645/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:42.886632  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/auto-748645/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:42.898006  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/auto-748645/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:42.919429  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/auto-748645/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:42.960873  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/auto-748645/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:43.042320  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/auto-748645/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:43.203710  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/auto-748645/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:43.525437  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/auto-748645/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:44.167393  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/auto-748645/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:45.448773  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/auto-748645/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (90.66s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-748645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-748645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m30.655154591s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (90.66s)

TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-748645 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

TestNetworkPlugins/group/auto/NetCatPod (11.53s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-748645 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xdjzx" [6aae51cb-cec7-40db-9e3e-940bae93d836] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xdjzx" [6aae51cb-cec7-40db-9e3e-940bae93d836] Running
E0913 19:29:53.158863  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.008042051s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.53s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-748645 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-748645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-748645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
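
The DNS, Localhost and HairPin subtests in each network-plugin group all exec into the netcat deployment created by NetCatPod: DNS resolves kubernetes.default through the in-cluster resolver, Localhost connects to the pod's own port over loopback, and HairPin connects back to the pod through its own service name, which exercises hairpin NAT on the node. A condensed Go sketch of the three probes, using the same kubectl invocations as the log above rather than the net_test.go helpers:

package main

import (
	"fmt"
	"os/exec"
)

// probe execs a command inside the netcat deployment, mirroring the
// kubectl commands recorded in the log.
func probe(ctx string, cmd ...string) {
	args := append([]string{"--context", ctx, "exec", "deployment/netcat", "--"}, cmd...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}

func main() {
	ctx := "auto-748645" // context name from the log above
	probe(ctx, "nslookup", "kubernetes.default")                  // DNS
	probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080") // Localhost
	probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")    // HairPin
}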

TestNetworkPlugins/group/calico/Start (68.78s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-748645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0913 19:30:20.861697  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-748645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m8.778362997s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.78s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xcx4k" [dac885c2-d9bc-4d46-a2f9-14c85834106c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004863592s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
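
ControllerPod waits for the CNI's own daemon pod (here, the kindnet DaemonSet pod in kube-system) to become healthy before any connectivity checks run. The helpers_test.go poll loop can be approximated with kubectl wait; the context, namespace, label and timeout below are copied from the log, and the rest is an illustration rather than the test's actual code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until every pod labelled app=kindnet in kube-system is Ready,
	// roughly what the 10m0s wait in the log above does.
	cmd := exec.Command("kubectl", "--context", "kindnet-748645",
		"wait", "pod", "--selector", "app=kindnet",
		"--namespace", "kube-system",
		"--for=condition=Ready", "--timeout=10m")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}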

TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-748645 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-748645 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zhtmm" [ac8d5059-6bfd-4b52-bf4f-424a3570c2e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0913 19:30:48.592385  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/no-preload-048437/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-zhtmm" [ac8d5059-6bfd-4b52-bf4f-424a3570c2e7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004018074s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-748645 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-748645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-748645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/custom-flannel/Start (57.95s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-748645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-748645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (57.954241595s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.95s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hkkhl" [a4623077-34cf-40cb-b970-0d436e4c1b9e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.010477905s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-748645 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

TestNetworkPlugins/group/calico/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-748645 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lzqkc" [d2a976b5-156c-407c-bca0-ff2a0d3d04fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lzqkc" [d2a976b5-156c-407c-bca0-ff2a0d3d04fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004886035s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.34s)

TestNetworkPlugins/group/calico/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-748645 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

TestNetworkPlugins/group/calico/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-748645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-748645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/Start (50.95s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-748645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0913 19:32:18.068394  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/functional-910777/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-748645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (50.94940968s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.95s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-748645 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-748645 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-glc4b" [fb1d1773-cd28-4b5a-b6fa-fe8f13e9f685] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-glc4b" [fb1d1773-cd28-4b5a-b6fa-fe8f13e9f685] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.007816517s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-748645 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-748645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-748645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.28s)

TestNetworkPlugins/group/flannel/Start (55.89s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-748645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0913 19:32:55.716697  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/default-k8s-diff-port-856372/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-748645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (55.893512453s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.89s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-748645 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-748645 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b4zv9" [cb54b257-b598-4dcd-9c6b-f10077149940] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0913 19:33:05.958793  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/default-k8s-diff-port-856372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-b4zv9" [cb54b257-b598-4dcd-9c6b-f10077149940] Running
E0913 19:33:09.498256  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/addons-365496/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005521744s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.52s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-748645 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.31s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-748645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-748645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

TestNetworkPlugins/group/bridge/Start (69.28s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-748645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-748645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m9.277034847s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.28s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qjfdf" [87a42efe-5f11-4369-b07c-9a24db9ebd8d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005430198s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-748645 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-748645 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kjk46" [282e0559-9dab-4c35-b28d-9450f04330e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kjk46" [282e0559-9dab-4c35-b28d-9450f04330e7] Running
E0913 19:34:07.405345  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/default-k8s-diff-port-856372/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003914574s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

TestNetworkPlugins/group/flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-748645 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-748645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-748645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-748645 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (8.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-748645 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nxxgf" [d55c3573-718c-48de-9824-036a48b3d0d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0913 19:34:48.010353  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/auto-748645/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-nxxgf" [d55c3573-718c-48de-9824-036a48b3d0d3] Running
E0913 19:34:53.132496  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/auto-748645/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:53.159113  300115 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-294721/.minikube/profiles/old-k8s-version-150959/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.00437393s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.26s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-748645 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-748645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-748645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (27/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-875542 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-875542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-875542
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
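The gvisor and none-driver skips above gate on a command-line flag and an environment variable, respectively. A hedged sketch of both gates (the flag name is an assumption, and the none-driver check itself is omitted for brevity):

package test

import (
	"flag"
	"os"
	"testing"
)

// Assumed flag; go test parses package-level flags before running tests.
var gvisor = flag.Bool("gvisor", false, "run gvisor addon tests")

func TestGvisorGateSketch(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
}

func TestSudoUserGateSketch(t *testing.T) {
	// The real test also requires the none driver; only the env gate is shown.
	if os.Getenv("SUDO_USER") == "" {
		t.Skip("Test requires none driver and SUDO_USER env to not be empty")
	}
}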
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-232603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-232603
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
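Even when a group is skipped, the harness still deletes its profile so later runs start from a clean slate, which is why the skip above still takes 0.17s. A hedged Go sketch of that cleanup step, mirroring the out/minikube-linux-arm64 delete -p invocation logged above (helper name and error handling are illustrative):

package test

import (
	"os/exec"
	"testing"
)

// cleanupProfile deletes a minikube profile via the binary under test,
// as helpers_test.go does after a skipped group.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
	if err != nil {
		// Cleanup failures are logged, not fatal, so the run can continue.
		t.Logf("failed to delete profile %s: %v\n%s", profile, err, out)
	}
}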
TestNetworkPlugins/group/kubenet (4.99s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-748645 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-748645

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-748645

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-748645

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-748645

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-748645

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-748645

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-748645

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-748645

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-748645

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-748645

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: /etc/hosts:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: /etc/resolv.conf:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-748645

>>> host: crictl pods:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: crictl containers:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> k8s: describe netcat deployment:
error: context "kubenet-748645" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-748645" does not exist

>>> k8s: netcat logs:
error: context "kubenet-748645" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-748645" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-748645" does not exist

>>> k8s: coredns logs:
error: context "kubenet-748645" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-748645" does not exist

>>> k8s: api server logs:
error: context "kubenet-748645" does not exist

>>> host: /etc/cni:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: ip a s:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: ip r s:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: iptables-save:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: iptables table nat:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-748645" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-748645" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-748645" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: kubelet daemon config:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> k8s: kubelet logs:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-748645

>>> host: docker daemon status:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: docker daemon config:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: docker system info:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: cri-docker daemon status:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: cri-docker daemon config:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: cri-dockerd version:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: containerd daemon status:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: containerd daemon config:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: containerd config dump:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: crio daemon status:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: crio daemon config:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: /etc/crio:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"

>>> host: crio config:
* Profile "kubenet-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-748645"
----------------------- debugLogs end: kubenet-748645 [took: 4.755195383s] --------------------------------
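Every probe in the dump above fails with "context was not found" or "does not exist" because the kubenet-748645 profile was never started: debugLogs simply runs its commands against a kubeconfig context that does not exist. A minimal Go sketch of one such probe (assumed command shape, not the harness's actual code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kubectl is pointed at a context that was never created, so the
	// call fails the same way every "k8s:" entry in the dump does.
	out, err := exec.Command("kubectl", "--context", "kubenet-748645", "get", "nodes").CombinedOutput()
	if err != nil {
		fmt.Printf("probe failed as expected: %v\n%s", err, out)
	}
}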
helpers_test.go:175: Cleaning up "kubenet-748645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-748645
--- SKIP: TestNetworkPlugins/group/kubenet (4.99s)

TestNetworkPlugins/group/cilium (4.51s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-748645 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-748645

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-748645

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-748645

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-748645

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-748645

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-748645

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-748645

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-748645

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-748645

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-748645

>>> host: /etc/nsswitch.conf:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: /etc/hosts:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: /etc/resolv.conf:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-748645

>>> host: crictl pods:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: crictl containers:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> k8s: describe netcat deployment:
error: context "cilium-748645" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-748645" does not exist

>>> k8s: netcat logs:
error: context "cilium-748645" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-748645" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-748645" does not exist

>>> k8s: coredns logs:
error: context "cilium-748645" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-748645" does not exist

>>> k8s: api server logs:
error: context "cilium-748645" does not exist

>>> host: /etc/cni:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: ip a s:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: ip r s:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: iptables-save:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: iptables table nat:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-748645

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-748645

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-748645" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-748645" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-748645

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-748645

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-748645" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-748645" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-748645" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-748645" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-748645" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: kubelet daemon config:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> k8s: kubelet logs:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-748645

>>> host: docker daemon status:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: docker daemon config:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: docker system info:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: cri-docker daemon status:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: cri-docker daemon config:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: cri-dockerd version:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: containerd daemon status:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: containerd daemon config:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: containerd config dump:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: crio daemon status:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: crio daemon config:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: /etc/crio:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"

>>> host: crio config:
* Profile "cilium-748645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748645"
----------------------- debugLogs end: cilium-748645 [took: 4.305303713s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-748645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-748645
--- SKIP: TestNetworkPlugins/group/cilium (4.51s)